2026-01-05 00:00:06.913288 | Job console starting
2026-01-05 00:00:06.945186 | Updating git repos
2026-01-05 00:00:07.222809 | Cloning repos into workspace
2026-01-05 00:00:07.495229 | Restoring repo states
2026-01-05 00:00:07.530313 | Merging changes
2026-01-05 00:00:07.530335 | Checking out repos
2026-01-05 00:00:07.892959 | Preparing playbooks
2026-01-05 00:00:09.203220 | Running Ansible setup
2026-01-05 00:00:18.892351 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-05 00:00:23.205354 |
2026-01-05 00:00:23.205573 | PLAY [Base pre]
2026-01-05 00:00:23.264179 |
2026-01-05 00:00:23.264345 | TASK [Setup log path fact]
2026-01-05 00:00:23.319494 | orchestrator | ok
2026-01-05 00:00:23.379721 |
2026-01-05 00:00:23.380797 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 00:00:23.508286 | orchestrator | ok
2026-01-05 00:00:23.580653 |
2026-01-05 00:00:23.581086 | TASK [emit-job-header : Print job information]
2026-01-05 00:00:23.774404 | # Job Information
2026-01-05 00:00:23.774610 | Ansible Version: 2.16.14
2026-01-05 00:00:23.774649 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-05 00:00:23.774684 | Pipeline: periodic-midnight
2026-01-05 00:00:23.774708 | Executor: 521e9411259a
2026-01-05 00:00:23.774729 | Triggered by: https://github.com/osism/testbed
2026-01-05 00:00:23.774753 | Event ID: 9a1e5e94553547229e870b2662f29864
2026-01-05 00:00:23.782137 |
2026-01-05 00:00:23.782276 | LOOP [emit-job-header : Print node information]
2026-01-05 00:00:24.806379 | orchestrator | ok:
2026-01-05 00:00:24.806686 | orchestrator | # Node Information
2026-01-05 00:00:24.806724 | orchestrator | Inventory Hostname: orchestrator
2026-01-05 00:00:24.806750 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-05 00:00:24.806772 | orchestrator | Username: zuul-testbed04
2026-01-05 00:00:24.806793 | orchestrator | Distro: Debian 12.12
2026-01-05 00:00:24.806817 | orchestrator | Provider: static-testbed
2026-01-05 00:00:24.806863 | orchestrator | Region:
2026-01-05 00:00:24.806898 | orchestrator | Label: testbed-orchestrator
2026-01-05 00:00:24.806920 | orchestrator | Product Name: OpenStack Nova
2026-01-05 00:00:24.806940 | orchestrator | Interface IP: 81.163.193.140
2026-01-05 00:00:24.841148 |
2026-01-05 00:00:24.848669 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:27.294369 | orchestrator -> localhost | changed
2026-01-05 00:00:27.304733 |
2026-01-05 00:00:27.304877 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-05 00:00:31.719567 | orchestrator -> localhost | changed
2026-01-05 00:00:31.751632 |
2026-01-05 00:00:31.751751 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-05 00:00:33.291000 | orchestrator -> localhost | ok
2026-01-05 00:00:33.298199 |
2026-01-05 00:00:33.298311 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-05 00:00:33.371354 | orchestrator | ok
2026-01-05 00:00:33.498179 | orchestrator | included: /var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-05 00:00:33.616131 |
2026-01-05 00:00:33.618063 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-05 00:00:40.078882 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-05 00:00:40.079048 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/aa3aa9c6cbca4062aefd45b6f753f4dc_id_rsa
2026-01-05 00:00:40.079080 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/aa3aa9c6cbca4062aefd45b6f753f4dc_id_rsa.pub
2026-01-05 00:00:40.079101 | orchestrator -> localhost | The key fingerprint is:
2026-01-05 00:00:40.079123 | orchestrator -> localhost | SHA256:E7eck8pFLVL/y+OfduGM2uI3Ip1kZVk3OvpLZZ50yBU zuul-build-sshkey
2026-01-05 00:00:40.079141 | orchestrator -> localhost | The key's randomart image is:
2026-01-05 00:00:40.079169 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-05 00:00:40.079187 | orchestrator -> localhost | | . E |
2026-01-05 00:00:40.079205 | orchestrator -> localhost | | . o oo|
2026-01-05 00:00:40.079222 | orchestrator -> localhost | | o + o + +|
2026-01-05 00:00:40.079238 | orchestrator -> localhost | | * = O o |
2026-01-05 00:00:40.079254 | orchestrator -> localhost | | S B + =+.|
2026-01-05 00:00:40.079272 | orchestrator -> localhost | | . + = .=oo|
2026-01-05 00:00:40.079288 | orchestrator -> localhost | | o + o.Bo.|
2026-01-05 00:00:40.079304 | orchestrator -> localhost | | . =o* =o|
2026-01-05 00:00:40.079321 | orchestrator -> localhost | | oo=o=oo|
2026-01-05 00:00:40.079338 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-05 00:00:40.079408 | orchestrator -> localhost | ok: Runtime: 0:00:03.892036
2026-01-05 00:00:40.086703 |
2026-01-05 00:00:40.086803 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-05 00:00:40.152272 | orchestrator | ok
2026-01-05 00:00:40.175733 | orchestrator | included: /var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-05 00:00:40.210858 |
2026-01-05 00:00:40.210962 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-05 00:00:40.283198 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:40.292837 |
2026-01-05 00:00:40.292929 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-05 00:00:41.506145 | orchestrator | changed
2026-01-05 00:00:41.513705 |
2026-01-05 00:00:41.513797 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-05 00:00:41.880914 | orchestrator | ok
2026-01-05 00:00:41.886219 |
2026-01-05 00:00:41.886308 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-05 00:00:42.429473 | orchestrator | ok
2026-01-05 00:00:42.434557 |
2026-01-05 00:00:42.434650 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-05 00:00:42.937549 | orchestrator | ok
2026-01-05 00:00:42.946153 |
2026-01-05 00:00:42.946243 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-05 00:00:43.034725 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:43.041458 |
2026-01-05 00:00:43.041562 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-05 00:00:44.714089 | orchestrator -> localhost | changed
2026-01-05 00:00:44.726176 |
2026-01-05 00:00:44.726277 | TASK [add-build-sshkey : Add back temp key]
2026-01-05 00:00:45.716967 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/aa3aa9c6cbca4062aefd45b6f753f4dc_id_rsa (zuul-build-sshkey)
2026-01-05 00:00:45.717149 | orchestrator -> localhost | ok: Runtime: 0:00:00.037563
2026-01-05 00:00:45.723198 |
2026-01-05 00:00:45.723286 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-05 00:00:46.449815 | orchestrator | ok
2026-01-05 00:00:46.454911 |
2026-01-05 00:00:46.455007 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-05 00:00:46.518172 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:46.615733 |
2026-01-05 00:00:46.615845 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-05 00:00:47.368639 | orchestrator | ok
2026-01-05 00:00:47.440136 |
2026-01-05 00:00:47.440239 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-05 00:00:47.574328 | orchestrator | ok
2026-01-05 00:00:47.666204 |
2026-01-05 00:00:47.666326 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:48.973728 | orchestrator -> localhost | ok
2026-01-05 00:00:48.980988 |
2026-01-05 00:00:48.981074 | TASK [validate-host : Collect information about the host]
2026-01-05 00:00:51.123514 | orchestrator | ok
2026-01-05 00:00:51.179047 |
2026-01-05 00:00:51.179149 | TASK [validate-host : Sanitize hostname]
2026-01-05 00:00:51.484597 | orchestrator | ok
2026-01-05 00:00:51.506283 |
2026-01-05 00:00:51.506426 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-05 00:00:52.895481 | orchestrator -> localhost | changed
2026-01-05 00:00:52.901408 |
2026-01-05 00:00:52.901495 | TASK [validate-host : Collect information about zuul worker]
2026-01-05 00:00:53.597807 | orchestrator | ok
2026-01-05 00:00:53.603927 |
2026-01-05 00:00:53.604017 | TASK [validate-host : Write out all zuul information for each host]
2026-01-05 00:00:55.472418 | orchestrator -> localhost | changed
2026-01-05 00:00:55.484466 |
2026-01-05 00:00:55.484554 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-05 00:00:55.910673 | orchestrator | ok
2026-01-05 00:00:55.916532 |
2026-01-05 00:00:55.916629 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-05 00:02:17.207038 | orchestrator | changed:
2026-01-05 00:02:17.207272 | orchestrator | .d..t...... src/
2026-01-05 00:02:17.207308 | orchestrator | .d..t...... src/github.com/
2026-01-05 00:02:17.207334 | orchestrator | .d..t...... src/github.com/osism/
2026-01-05 00:02:17.207356 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-05 00:02:17.207377 | orchestrator | RedHat.yml
2026-01-05 00:02:17.223954 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-05 00:02:17.223973 | orchestrator | RedHat.yml
2026-01-05 00:02:17.224028 | orchestrator | = 1.53.0"...
2026-01-05 00:02:28.561555 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-05 00:02:28.692471 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-05 00:02:29.229003 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:29.287229 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-05 00:02:29.996405 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-05 00:02:30.054555 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-05 00:02:30.591148 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:30.591221 | orchestrator |
2026-01-05 00:02:30.591228 | orchestrator | Providers are signed by their developers.
2026-01-05 00:02:30.591233 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-05 00:02:30.591245 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-05 00:02:30.591281 | orchestrator |
2026-01-05 00:02:30.591287 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-05 00:02:30.591291 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-05 00:02:30.591304 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-05 00:02:30.591315 | orchestrator | you run "tofu init" in the future.
2026-01-05 00:02:30.591718 | orchestrator |
2026-01-05 00:02:30.591759 | orchestrator | OpenTofu has been successfully initialized!
2026-01-05 00:02:30.591784 | orchestrator |
2026-01-05 00:02:30.591789 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-05 00:02:30.591794 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-05 00:02:30.591799 | orchestrator | should now work.
2026-01-05 00:02:30.591803 | orchestrator |
2026-01-05 00:02:30.591807 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-05 00:02:30.591811 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-05 00:02:30.591822 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-05 00:02:30.779491 | orchestrator | Created and switched to workspace "ci"!
2026-01-05 00:02:30.779602 | orchestrator |
2026-01-05 00:02:30.779618 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-05 00:02:30.779632 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-05 00:02:30.779644 | orchestrator | for this configuration.
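The provider installs and version constraints visible in the init output above (hashicorp/null, terraform-provider-openstack/openstack matching ">= 1.53.0", hashicorp/local matching ">= 2.2.0") correspond to a `required_providers` block along these lines. This is a minimal sketch inferred from the log, not the actual versions file in the osism/testbed repository:

```hcl
# Hypothetical reconstruction of the provider requirements implied by the
# "tofu init" output above; the real testbed configuration may differ.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

With such constraints in place, `tofu init` resolves the newest matching releases (here null v3.2.4, openstack v3.4.0, local v2.6.1) and pins them in `.terraform.lock.hcl`, as the log describes.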
2026-01-05 00:02:30.932343 | orchestrator | ci.auto.tfvars
2026-01-05 00:02:31.293960 | orchestrator | default_custom.tf
2026-01-05 00:02:37.437157 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-05 00:02:37.989358 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-05 00:02:39.297353 | orchestrator |
2026-01-05 00:02:39.297459 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-05 00:02:39.297474 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-05 00:02:39.297483 | orchestrator | + create
2026-01-05 00:02:39.297492 | orchestrator | <= read (data resources)
2026-01-05 00:02:39.297500 | orchestrator |
2026-01-05 00:02:39.297509 | orchestrator | OpenTofu will perform the following actions:
2026-01-05 00:02:39.297528 | orchestrator |
2026-01-05 00:02:39.297537 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-05 00:02:39.297545 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:39.297553 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-05 00:02:39.297561 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:39.297568 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:39.297576 | orchestrator | + file = (known after apply)
2026-01-05 00:02:39.297583 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.297617 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.297625 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:39.297633 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:39.297640 | orchestrator | + most_recent = true
2026-01-05 00:02:39.297648 | orchestrator | + name = (known after apply)
2026-01-05 00:02:39.297655 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:39.297662 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.297732 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:39.297740 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:39.297747 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:39.297754 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:39.297762 | orchestrator | }
2026-01-05 00:02:39.297775 | orchestrator |
2026-01-05 00:02:39.297788 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-05 00:02:39.297801 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:39.297812 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-05 00:02:39.297825 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:39.297837 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:39.297875 | orchestrator | + file = (known after apply)
2026-01-05 00:02:39.297887 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.297898 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.297909 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:39.297921 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:39.297933 | orchestrator | + most_recent = true
2026-01-05 00:02:39.297946 | orchestrator | + name = (known after apply)
2026-01-05 00:02:39.297957 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:39.297968 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.297980 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:39.297991 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:39.298003 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:39.298123 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:39.298143 | orchestrator | }
2026-01-05 00:02:39.298154 | orchestrator |
2026-01-05 00:02:39.298165 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-05 00:02:39.298178 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-05 00:02:39.298190 | orchestrator | + content = (known after apply)
2026-01-05 00:02:39.298199 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:39.298206 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:39.298213 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:39.298221 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:39.298228 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:39.298235 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:39.298243 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:39.298250 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:39.298257 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-05 00:02:39.298264 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298272 | orchestrator | }
2026-01-05 00:02:39.298279 | orchestrator |
2026-01-05 00:02:39.298286 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-05 00:02:39.298293 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-05 00:02:39.298301 | orchestrator | + content = (known after apply)
2026-01-05 00:02:39.298308 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:39.298315 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:39.298322 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:39.298329 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:39.298336 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:39.298344 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:39.298351 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:39.298358 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:39.298377 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-05 00:02:39.298385 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298392 | orchestrator | }
2026-01-05 00:02:39.298407 | orchestrator |
2026-01-05 00:02:39.298424 | orchestrator | # local_file.inventory will be created
2026-01-05 00:02:39.298431 | orchestrator | + resource "local_file" "inventory" {
2026-01-05 00:02:39.298439 | orchestrator | + content = (known after apply)
2026-01-05 00:02:39.298446 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:39.298454 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:39.298461 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:39.298468 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:39.298476 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:39.298484 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:39.298491 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:39.298498 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:39.298506 | orchestrator | + filename = "inventory.ci"
2026-01-05 00:02:39.298513 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298520 | orchestrator | }
2026-01-05 00:02:39.298527 | orchestrator |
2026-01-05 00:02:39.298535 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-05 00:02:39.298542 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-05 00:02:39.298549 | orchestrator | + content = (sensitive value)
2026-01-05 00:02:39.298557 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:39.298564 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:39.298571 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:39.298578 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:39.298586 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:39.298593 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:39.298600 | orchestrator | + directory_permission = "0700"
2026-01-05 00:02:39.298607 | orchestrator | + file_permission = "0600"
2026-01-05 00:02:39.298615 | orchestrator | + filename = ".id_rsa.ci"
2026-01-05 00:02:39.298622 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298629 | orchestrator | }
2026-01-05 00:02:39.298636 | orchestrator |
2026-01-05 00:02:39.298644 | orchestrator | # null_resource.node_semaphore will be created
2026-01-05 00:02:39.298651 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-05 00:02:39.298658 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298665 | orchestrator | }
2026-01-05 00:02:39.298673 | orchestrator |
2026-01-05 00:02:39.298680 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-05 00:02:39.298688 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-05 00:02:39.298695 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.298702 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.298710 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298717 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.298724 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.298731 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-05 00:02:39.298739 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.298746 | orchestrator | + size = 80
2026-01-05 00:02:39.298753 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.298760 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.298768 | orchestrator | }
2026-01-05 00:02:39.298775 | orchestrator |
2026-01-05 00:02:39.298782 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-05 00:02:39.298790 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.298797 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.298804 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.298812 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298823 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.298831 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.298838 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-05 00:02:39.298846 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.298878 | orchestrator | + size = 80
2026-01-05 00:02:39.298891 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.298903 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.298915 | orchestrator | }
2026-01-05 00:02:39.298923 | orchestrator |
2026-01-05 00:02:39.298930 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-05 00:02:39.298937 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.298945 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.298952 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.298959 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.298966 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.298974 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.298981 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-05 00:02:39.298988 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.298995 | orchestrator | + size = 80
2026-01-05 00:02:39.299003 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299011 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299018 | orchestrator | }
2026-01-05 00:02:39.299025 | orchestrator |
2026-01-05 00:02:39.299033 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-05 00:02:39.299040 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.299047 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299054 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299063 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299075 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.299087 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299100 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-05 00:02:39.299111 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299123 | orchestrator | + size = 80
2026-01-05 00:02:39.299135 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299146 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299157 | orchestrator | }
2026-01-05 00:02:39.299175 | orchestrator |
2026-01-05 00:02:39.299188 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-05 00:02:39.299199 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.299210 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299221 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299233 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299244 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.299256 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299276 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-05 00:02:39.299289 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299301 | orchestrator | + size = 80
2026-01-05 00:02:39.299313 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299326 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299337 | orchestrator | }
2026-01-05 00:02:39.299344 | orchestrator |
2026-01-05 00:02:39.299357 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-05 00:02:39.299369 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.299382 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299394 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299407 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299430 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.299438 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299445 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-05 00:02:39.299453 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299460 | orchestrator | + size = 80
2026-01-05 00:02:39.299467 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299475 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299482 | orchestrator | }
2026-01-05 00:02:39.299489 | orchestrator |
2026-01-05 00:02:39.299497 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-05 00:02:39.299504 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:39.299511 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299518 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299526 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299533 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:39.299540 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299548 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-05 00:02:39.299555 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299562 | orchestrator | + size = 80
2026-01-05 00:02:39.299570 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299577 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299584 | orchestrator | }
2026-01-05 00:02:39.299592 | orchestrator |
2026-01-05 00:02:39.299599 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-05 00:02:39.299607 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.299614 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299622 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299629 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299636 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299644 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-05 00:02:39.299651 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299659 | orchestrator | + size = 20
2026-01-05 00:02:39.299666 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299676 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299688 | orchestrator | }
2026-01-05 00:02:39.299701 | orchestrator |
2026-01-05 00:02:39.299712 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-05 00:02:39.299723 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.299735 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299747 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299759 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299771 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299782 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-05 00:02:39.299793 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299805 | orchestrator | + size = 20
2026-01-05 00:02:39.299817 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.299829 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.299842 | orchestrator | }
2026-01-05 00:02:39.299872 | orchestrator |
2026-01-05 00:02:39.299885 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-05 00:02:39.299897 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.299909 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.299922 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.299935 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.299948 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.299959 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-05 00:02:39.299971 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.299991 | orchestrator | + size = 20
2026-01-05 00:02:39.300002 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300015 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300027 | orchestrator | }
2026-01-05 00:02:39.300040 | orchestrator |
2026-01-05 00:02:39.300052 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-05 00:02:39.300063 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.300074 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.300086 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.300098 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.300111 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.300123 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-05 00:02:39.300136 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.300149 | orchestrator | + size = 20
2026-01-05 00:02:39.300160 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300173 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300184 | orchestrator | }
2026-01-05 00:02:39.300196 | orchestrator |
2026-01-05 00:02:39.300221 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-05 00:02:39.300235 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.300248 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.300260 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.300272 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.300285 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.300298 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-05 00:02:39.300309 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.300330 | orchestrator | + size = 20
2026-01-05 00:02:39.300343 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300354 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300366 | orchestrator | }
2026-01-05 00:02:39.300377 | orchestrator |
2026-01-05 00:02:39.300388 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-05 00:02:39.300400 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.300411 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.300423 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.300436 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.300447 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.300459 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-05 00:02:39.300470 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.300482 | orchestrator | + size = 20
2026-01-05 00:02:39.300494 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300507 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300520 | orchestrator | }
2026-01-05 00:02:39.300532 | orchestrator |
2026-01-05 00:02:39.300543 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-05 00:02:39.300556 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.300568 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.300580 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.300592 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.300604 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.300616 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-05 00:02:39.300629 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.300641 | orchestrator | + size = 20
2026-01-05 00:02:39.300653 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300664 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300676 | orchestrator | }
2026-01-05 00:02:39.300688 | orchestrator |
2026-01-05 00:02:39.300700 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-05 00:02:39.300712 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:39.300735 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:39.300748 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:39.300760 | orchestrator | + id = (known after apply)
2026-01-05 00:02:39.300773 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:39.300784 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-05 00:02:39.300796 | orchestrator | + region = (known after apply)
2026-01-05 00:02:39.300808 | orchestrator | + size = 20
2026-01-05 00:02:39.300820 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:39.300833 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:39.300844 | orchestrator | }
2026-01-05 00:02:39.300889 | orchestrator |
2026-01-05 00:02:39.300901 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-05 00:02:39.300913 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-05 00:02:39.300925 | orchestrator | + attachment = (known after apply) 2026-01-05 00:02:39.300937 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.300949 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.300961 | orchestrator | + metadata = (known after apply) 2026-01-05 00:02:39.300972 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-05 00:02:39.300983 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.300995 | orchestrator | + size = 20 2026-01-05 00:02:39.301007 | orchestrator | + volume_retype_policy = "never" 2026-01-05 00:02:39.301019 | orchestrator | + volume_type = "ssd" 2026-01-05 00:02:39.301030 | orchestrator | } 2026-01-05 00:02:39.301041 | orchestrator | 2026-01-05 00:02:39.301052 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-05 00:02:39.301063 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-05 00:02:39.301074 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.301086 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.301098 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.301111 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.301123 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.301134 | orchestrator | + config_drive = true 2026-01-05 00:02:39.301146 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.301158 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.301169 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-05 00:02:39.301181 | orchestrator | + force_delete = false 2026-01-05 00:02:39.301193 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.301205 | 
orchestrator | + id = (known after apply) 2026-01-05 00:02:39.301217 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.301228 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.301240 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.301252 | orchestrator | + name = "testbed-manager" 2026-01-05 00:02:39.301263 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.301275 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.301287 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.301299 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.301311 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.301323 | orchestrator | + user_data = (sensitive value) 2026-01-05 00:02:39.301335 | orchestrator | 2026-01-05 00:02:39.301348 | orchestrator | + block_device { 2026-01-05 00:02:39.301359 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.301371 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.301391 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.301403 | orchestrator | + multiattach = false 2026-01-05 00:02:39.301414 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.301441 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.301465 | orchestrator | } 2026-01-05 00:02:39.301478 | orchestrator | 2026-01-05 00:02:39.301490 | orchestrator | + network { 2026-01-05 00:02:39.301502 | orchestrator | + access_network = false 2026-01-05 00:02:39.301513 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.301526 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.301537 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.301550 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.301561 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.301573 | orchestrator | + uuid = (known after apply) 2026-01-05 
00:02:39.301584 | orchestrator | } 2026-01-05 00:02:39.301595 | orchestrator | } 2026-01-05 00:02:39.301607 | orchestrator | 2026-01-05 00:02:39.301619 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-05 00:02:39.301631 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.301642 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.301653 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.301665 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.301676 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.301688 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.301699 | orchestrator | + config_drive = true 2026-01-05 00:02:39.301711 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.301722 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.301734 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.301746 | orchestrator | + force_delete = false 2026-01-05 00:02:39.301757 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.301769 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.301782 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.301794 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.301807 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.301819 | orchestrator | + name = "testbed-node-0" 2026-01-05 00:02:39.301830 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.301842 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.301880 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.301892 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.301904 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.301915 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.301927 | orchestrator | 2026-01-05 00:02:39.301939 | orchestrator | + block_device { 2026-01-05 00:02:39.301951 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.301963 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.301976 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.301988 | orchestrator | + multiattach = false 2026-01-05 00:02:39.302000 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.302043 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.302060 | orchestrator | } 2026-01-05 00:02:39.302072 | orchestrator | 2026-01-05 00:02:39.302084 | orchestrator | + network { 2026-01-05 00:02:39.302096 | orchestrator | + access_network = false 2026-01-05 00:02:39.302108 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.302121 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.302132 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.302144 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.302156 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.302168 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.302181 | orchestrator | } 2026-01-05 00:02:39.302193 | orchestrator | } 2026-01-05 00:02:39.302205 | orchestrator | 2026-01-05 00:02:39.302218 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-05 00:02:39.302230 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.302243 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.302270 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.302284 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.302297 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.302314 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.302326 
| orchestrator | + config_drive = true 2026-01-05 00:02:39.302339 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.302350 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.302362 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.302374 | orchestrator | + force_delete = false 2026-01-05 00:02:39.302381 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.302388 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.302396 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.302403 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.302410 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.302417 | orchestrator | + name = "testbed-node-1" 2026-01-05 00:02:39.302424 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.302432 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.302439 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.302446 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.302453 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.302461 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.302468 | orchestrator | 2026-01-05 00:02:39.302475 | orchestrator | + block_device { 2026-01-05 00:02:39.302483 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.302490 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.302497 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.302504 | orchestrator | + multiattach = false 2026-01-05 00:02:39.302511 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.302518 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.302526 | orchestrator | } 2026-01-05 00:02:39.302533 | orchestrator | 2026-01-05 00:02:39.302540 | orchestrator | + network { 2026-01-05 00:02:39.302548 | orchestrator | + access_network = 
false 2026-01-05 00:02:39.302555 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.302562 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.302569 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.302576 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.302583 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.302591 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.302598 | orchestrator | } 2026-01-05 00:02:39.302605 | orchestrator | } 2026-01-05 00:02:39.302612 | orchestrator | 2026-01-05 00:02:39.302632 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-05 00:02:39.302639 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.302646 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.302654 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.302666 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.302678 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.302699 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.302712 | orchestrator | + config_drive = true 2026-01-05 00:02:39.302725 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.302738 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.302749 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.302762 | orchestrator | + force_delete = false 2026-01-05 00:02:39.302770 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.302779 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.302792 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.302807 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.302814 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.302821 | orchestrator | + name = 
"testbed-node-2" 2026-01-05 00:02:39.302829 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.302836 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.302843 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.302876 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.302884 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.302891 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.302899 | orchestrator | 2026-01-05 00:02:39.302907 | orchestrator | + block_device { 2026-01-05 00:02:39.302919 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.302931 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.302943 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.302954 | orchestrator | + multiattach = false 2026-01-05 00:02:39.302966 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.302979 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.302992 | orchestrator | } 2026-01-05 00:02:39.303004 | orchestrator | 2026-01-05 00:02:39.303016 | orchestrator | + network { 2026-01-05 00:02:39.303030 | orchestrator | + access_network = false 2026-01-05 00:02:39.303043 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.303055 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.303068 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.303081 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.303093 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.303106 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.303119 | orchestrator | } 2026-01-05 00:02:39.303131 | orchestrator | } 2026-01-05 00:02:39.303144 | orchestrator | 2026-01-05 00:02:39.303156 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-05 00:02:39.303169 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.303181 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.303194 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.303207 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.303220 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.303232 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.303244 | orchestrator | + config_drive = true 2026-01-05 00:02:39.303256 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.303268 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.303280 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.303292 | orchestrator | + force_delete = false 2026-01-05 00:02:39.303303 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.303316 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.303328 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.303338 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.303351 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.303363 | orchestrator | + name = "testbed-node-3" 2026-01-05 00:02:39.303375 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.303387 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.303399 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.303411 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.303423 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.303435 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.303447 | orchestrator | 2026-01-05 00:02:39.303459 | orchestrator | + block_device { 2026-01-05 00:02:39.303482 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.303491 | orchestrator | + delete_on_termination = false 2026-01-05 
00:02:39.303498 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.303511 | orchestrator | + multiattach = false 2026-01-05 00:02:39.303522 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.303534 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.303547 | orchestrator | } 2026-01-05 00:02:39.303558 | orchestrator | 2026-01-05 00:02:39.303571 | orchestrator | + network { 2026-01-05 00:02:39.303584 | orchestrator | + access_network = false 2026-01-05 00:02:39.303596 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.303608 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.303620 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.303632 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.303643 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.303654 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.303666 | orchestrator | } 2026-01-05 00:02:39.303677 | orchestrator | } 2026-01-05 00:02:39.303688 | orchestrator | 2026-01-05 00:02:39.303701 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-05 00:02:39.303713 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.303725 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.303737 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.303750 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.303762 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.303774 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.303786 | orchestrator | + config_drive = true 2026-01-05 00:02:39.303798 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.303818 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.303831 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.303842 | 
orchestrator | + force_delete = false 2026-01-05 00:02:39.303873 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.303886 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.303898 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.303911 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.303923 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.303935 | orchestrator | + name = "testbed-node-4" 2026-01-05 00:02:39.303947 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.303960 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.303972 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.303981 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:39.303988 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.303996 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.304003 | orchestrator | 2026-01-05 00:02:39.304010 | orchestrator | + block_device { 2026-01-05 00:02:39.304018 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.304025 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.304032 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.304039 | orchestrator | + multiattach = false 2026-01-05 00:02:39.304047 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.304054 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.304061 | orchestrator | } 2026-01-05 00:02:39.304068 | orchestrator | 2026-01-05 00:02:39.304075 | orchestrator | + network { 2026-01-05 00:02:39.304083 | orchestrator | + access_network = false 2026-01-05 00:02:39.304090 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.304097 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.304104 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.304111 | orchestrator | + name = (known 
after apply) 2026-01-05 00:02:39.304119 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.304126 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.304133 | orchestrator | } 2026-01-05 00:02:39.304140 | orchestrator | } 2026-01-05 00:02:39.304154 | orchestrator | 2026-01-05 00:02:39.304162 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-05 00:02:39.304169 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:39.304177 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:39.304184 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:39.304191 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:39.304198 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:39.304205 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:39.304213 | orchestrator | + config_drive = true 2026-01-05 00:02:39.304220 | orchestrator | + created = (known after apply) 2026-01-05 00:02:39.304227 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:39.304234 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:39.304241 | orchestrator | + force_delete = false 2026-01-05 00:02:39.304254 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:39.304261 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.304268 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:39.304276 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:39.304283 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:39.304290 | orchestrator | + name = "testbed-node-5" 2026-01-05 00:02:39.304297 | orchestrator | + power_state = "active" 2026-01-05 00:02:39.304305 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.304312 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:39.304319 | orchestrator | + 
stop_before_destroy = false 2026-01-05 00:02:39.304326 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:39.304334 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:39.304341 | orchestrator | 2026-01-05 00:02:39.304348 | orchestrator | + block_device { 2026-01-05 00:02:39.304356 | orchestrator | + boot_index = 0 2026-01-05 00:02:39.304363 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:39.304370 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:39.304377 | orchestrator | + multiattach = false 2026-01-05 00:02:39.304384 | orchestrator | + source_type = "volume" 2026-01-05 00:02:39.304391 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.304399 | orchestrator | } 2026-01-05 00:02:39.304406 | orchestrator | 2026-01-05 00:02:39.304413 | orchestrator | + network { 2026-01-05 00:02:39.304420 | orchestrator | + access_network = false 2026-01-05 00:02:39.304428 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:39.304435 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:39.304442 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:39.304450 | orchestrator | + name = (known after apply) 2026-01-05 00:02:39.304457 | orchestrator | + port = (known after apply) 2026-01-05 00:02:39.304464 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:39.304471 | orchestrator | } 2026-01-05 00:02:39.304479 | orchestrator | } 2026-01-05 00:02:39.304486 | orchestrator | 2026-01-05 00:02:39.304493 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-05 00:02:39.304501 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-05 00:02:39.304508 | orchestrator | + fingerprint = (known after apply) 2026-01-05 00:02:39.304519 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.304531 | orchestrator | + name = "testbed" 2026-01-05 00:02:39.304543 | orchestrator | + private_key = 
(sensitive value) 2026-01-05 00:02:39.304554 | orchestrator | + public_key = (known after apply) 2026-01-05 00:02:39.304566 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.304576 | orchestrator | + user_id = (known after apply) 2026-01-05 00:02:39.304587 | orchestrator | } 2026-01-05 00:02:39.304599 | orchestrator | 2026-01-05 00:02:39.304611 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-05 00:02:39.304625 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:39.304644 | orchestrator | + device = (known after apply) 2026-01-05 00:02:39.304654 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.304666 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:39.304678 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.304690 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:39.304702 | orchestrator | } 2026-01-05 00:02:39.304714 | orchestrator | 2026-01-05 00:02:39.304726 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-05 00:02:39.304746 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:39.304759 | orchestrator | + device = (known after apply) 2026-01-05 00:02:39.304770 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.304783 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:39.304796 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.304808 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:39.304820 | orchestrator | } 2026-01-05 00:02:39.304833 | orchestrator | 2026-01-05 00:02:39.304845 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-05 00:02:39.304975 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-01-05 00:02:39.304986 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-05 00:02:39.308827 | orchestrator | + network_id = (known after apply) 2026-01-05 00:02:39.308833 | orchestrator | + no_gateway = false 2026-01-05 00:02:39.308839 | orchestrator | + region = (known after apply) 2026-01-05 00:02:39.308846 | orchestrator | + service_types = (known after apply) 2026-01-05 00:02:39.309022 | orchestrator | + tenant_id = (known after apply) 2026-01-05 00:02:39.309029 | orchestrator | 2026-01-05 00:02:39.309035 | orchestrator | + allocation_pool { 2026-01-05 00:02:39.309041 | orchestrator | + end = "192.168.31.250" 2026-01-05 00:02:39.309048 | orchestrator | + start = "192.168.31.200" 2026-01-05 00:02:39.309054 | orchestrator | } 2026-01-05 00:02:39.309060 | orchestrator | } 2026-01-05 00:02:39.309066 | orchestrator | 2026-01-05 00:02:39.309072 | orchestrator | # terraform_data.image will be created 2026-01-05 00:02:39.309079 | orchestrator | + resource "terraform_data" "image" { 2026-01-05 00:02:39.309085 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.309091 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:39.309097 | orchestrator | + output = (known after apply) 2026-01-05 00:02:39.309103 | orchestrator | } 2026-01-05 00:02:39.309110 | orchestrator | 2026-01-05 00:02:39.309116 | orchestrator | # terraform_data.image_node will be created 2026-01-05 00:02:39.309122 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-05 00:02:39.309128 | orchestrator | + id = (known after apply) 2026-01-05 00:02:39.309134 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:39.309140 | orchestrator | + output = (known after apply) 2026-01-05 00:02:39.309147 | orchestrator | } 2026-01-05 00:02:39.309153 | orchestrator | 2026-01-05 00:02:39.309159 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
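Editor's aside: the plan summary just above ("Plan: 64 to add, 0 to change, 0 to destroy.") is the line a CI wrapper would typically key off to gate an apply. A minimal sketch of extracting those counts from a console entry like the ones in this log (the function name and sample line are illustrative, not part of the job):

```python
import re

# Matches Terraform's standard plan summary, anywhere in a console line.
PLAN_RE = re.compile(r"Plan: (\d+) to add, (\d+) to change, (\d+) to destroy\.")

def parse_plan_summary(line: str):
    """Return (add, change, destroy) counts, or None if the line has no summary."""
    m = PLAN_RE.search(line)
    return tuple(int(g) for g in m.groups()) if m else None

line = "2026-01-05 00:02:39.309159 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy."
print(parse_plan_summary(line))  # -> (64, 0, 0)
```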
2026-01-05 00:02:39.309165 | orchestrator | 2026-01-05 00:02:39.309171 | orchestrator | Changes to Outputs: 2026-01-05 00:02:39.309177 | orchestrator | + manager_address = (sensitive value) 2026-01-05 00:02:39.309184 | orchestrator | + private_key = (sensitive value) 2026-01-05 00:02:39.578917 | orchestrator | terraform_data.image: Creating... 2026-01-05 00:02:39.579947 | orchestrator | terraform_data.image: Creation complete after 0s [id=7cb1ffcd-a7e0-6c9f-f344-79e797596bed] 2026-01-05 00:02:39.580306 | orchestrator | terraform_data.image_node: Creating... 2026-01-05 00:02:39.580840 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=bcb5113d-d779-404a-a2db-93be87e4c020] 2026-01-05 00:02:39.599680 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-05 00:02:39.603238 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-05 00:02:39.608185 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-05 00:02:39.609196 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-05 00:02:39.610292 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-05 00:02:39.611097 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-05 00:02:39.611771 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-05 00:02:39.612375 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-05 00:02:39.612544 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-05 00:02:39.612667 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-05 00:02:40.067231 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:40.082903 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
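Editor's aside: the subnet planned above places an allocation pool (192.168.31.200 to 192.168.31.250) inside cidr 192.168.16.0/20. The arithmetic can be sanity-checked with Python's stdlib `ipaddress` module (a standalone sketch, not part of the job):

```python
import ipaddress

# Values taken from the openstack_networking_subnet_v2.subnet_management plan above.
subnet = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool boundaries must fall inside the subnet's CIDR.
assert pool_start in subnet and pool_end in subnet

# Inclusive size of the DHCP allocation pool.
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # -> 51
```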
2026-01-05 00:02:40.084516 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:40.089704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-05 00:02:40.141380 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-01-05 00:02:40.149472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-05 00:02:40.885763 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3beff355-1612-41ea-a50d-47c7d83d05a5] 2026-01-05 00:02:40.895195 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-05 00:02:43.262779 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=03b7017e-e1b0-457d-9587-8b11f2102bb3] 2026-01-05 00:02:43.273696 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-05 00:02:43.285276 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=b121dca3-24d1-4b7b-930a-60908a09b3ff] 2026-01-05 00:02:43.292030 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-05 00:02:43.314412 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=b9761713-1df3-4432-b4b3-360f49d55392] 2026-01-05 00:02:43.316441 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=ecd5c862-499b-48c6-9c2d-7fcffc72f10c] 2026-01-05 00:02:43.322335 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-05 00:02:43.333776 | orchestrator | local_file.id_rsa_pub: Creating... 
2026-01-05 00:02:43.335936 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f8dcabc6-fabd-45fd-9c41-4607b08934e9] 2026-01-05 00:02:43.339673 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=bafec76472d1d3bacdd8e21ea567750ffc28050e] 2026-01-05 00:02:43.350206 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-05 00:02:43.350581 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-05 00:02:43.383541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=891598f0-de5b-4bdc-89c5-6a431d2de302] 2026-01-05 00:02:43.389875 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-05 00:02:43.398182 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=b0dfd45a-7f89-49c0-be70-a4c437682b52] 2026-01-05 00:02:43.410384 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-05 00:02:43.413993 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=cae250cee3942b3b8aaef4eccb2a52859aca6d2c] 2026-01-05 00:02:43.420104 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-01-05 00:02:43.424063 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=2b1c1f48-cee6-4c03-87f8-c43c8286bcc4] 2026-01-05 00:02:43.428114 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=383c4b06-6a59-4554-8f50-cd156928eda0] 2026-01-05 00:02:44.333935 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=727f4bd2-452c-4e7f-894b-3e4d9665288d] 2026-01-05 00:02:44.504128 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=aefec0e8-ed5b-4712-b1a0-c7ea00f00dcb] 2026-01-05 00:02:44.504211 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-05 00:02:46.722954 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=4705a6e7-7472-4153-8b28-61d97fe23078] 2026-01-05 00:02:46.736185 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11] 2026-01-05 00:02:46.784774 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=796c4eb8-0610-4712-8614-781cad59caeb] 2026-01-05 00:02:46.793769 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=27d06f6a-b839-4b4f-97f0-cacfc59b2589] 2026-01-05 00:02:46.874645 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=47b5dff2-66dd-4733-9974-3b39262202ed] 2026-01-05 00:02:47.593464 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 5s [id=afb8d460-827e-407a-9ee0-f351bfc1cb1b] 2026-01-05 00:02:48.393157 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=c38d247b-b547-4cf5-b9f4-91af7c73e04d] 2026-01-05 00:02:48.401694 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-01-05 00:02:48.404982 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-05 00:02:48.407939 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-05 00:02:48.616622 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=8568cfa9-d1e8-4f2a-b94b-113c73cacaf0] 2026-01-05 00:02:48.635637 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-05 00:02:48.636697 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-05 00:02:48.640330 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-05 00:02:48.640450 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-05 00:02:48.641191 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-05 00:02:48.642414 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-05 00:02:48.645701 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-05 00:02:48.651031 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-05 00:02:48.659239 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=adee947a-50df-4859-aaa9-26312951e223] 2026-01-05 00:02:48.667576 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-05 00:02:49.374792 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=321c1eb9-93d2-4bc4-8136-8f4f3fe6fe49] 2026-01-05 00:02:49.386429 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-01-05 00:02:49.431415 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=244e9f84-abae-4d5b-8292-79b28165f23c] 2026-01-05 00:02:49.441694 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-05 00:02:49.806989 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=cfc5bbba-97c3-434e-b9d8-11a2f685ae39] 2026-01-05 00:02:49.814494 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-05 00:02:50.015377 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a1404bd9-8dbe-4e48-b20a-ca065fc5dac2] 2026-01-05 00:02:50.019545 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-05 00:02:50.113885 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=4fe40d07-b713-4170-8583-91f1cce0f833] 2026-01-05 00:02:50.120805 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-05 00:02:50.205802 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=00ec9ff0-4207-42e9-81fb-457363579b78] 2026-01-05 00:02:50.222926 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-05 00:02:50.623964 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=710fa76e-fc82-423f-a233-78758952a41a] 2026-01-05 00:02:50.630694 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2026-01-05 00:02:50.636015 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=3fc9cf55-4a2b-454b-8153-4912932b8dd4] 2026-01-05 00:02:50.877407 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=f7bf34ed-d974-4a04-b42d-8c8f758f020b] 2026-01-05 00:02:51.041146 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=534c6c66-3d01-48a5-821f-01231b792d40] 2026-01-05 00:02:51.195043 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=22cc9365-c3af-4c68-9a5d-fed209fb04a8] 2026-01-05 00:02:51.306417 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=15591aa2-6cfe-49c2-bc2d-a237ff238f13] 2026-01-05 00:02:51.351630 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=8aeec703-208d-41a1-96b3-3d3ac9755dbf] 2026-01-05 00:02:51.377794 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9677aece-8792-4586-a627-21c17ae2018d] 2026-01-05 00:02:51.775214 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 3s [id=b76583dc-5463-4c45-bc39-ec191266b940] 2026-01-05 00:02:51.888554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=55c5771b-82d6-47f4-aa2d-404926ad0118] 2026-01-05 00:02:53.088059 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=fcdf6b38-c4a8-4203-ab6f-4dfb685743c8] 2026-01-05 00:02:53.111696 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-05 00:02:53.126152 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
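Editor's aside: Terraform reports each finished resource as "NAME: Creation complete after Ns [id=...]", as throughout this log. A small sketch of parsing one such entry into (resource, seconds, id), useful for summarizing where apply time went (names and the sample entry are taken from this log; the helper itself is illustrative):

```python
import re

# Resource name, duration in seconds, and the assigned id.
COMPLETE_RE = re.compile(r"(\S+): Creation complete after (\d+)s \[id=([^\]]+)\]")

def parse_completion(entry: str):
    """Return (resource, seconds, id), or None for non-completion entries."""
    m = COMPLETE_RE.search(entry)
    return (m.group(1), int(m.group(2)), m.group(3)) if m else None

entry = ("2026-01-05 00:02:44.333935 | orchestrator | "
         "openstack_blockstorage_volume_v3.manager_base_volume[0]: "
         "Creation complete after 3s [id=727f4bd2-452c-4e7f-894b-3e4d9665288d]")
print(parse_completion(entry))
```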
2026-01-05 00:02:53.128237 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-05 00:02:53.131509 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-05 00:02:53.131962 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-05 00:02:53.132518 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-05 00:02:53.145930 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-05 00:02:54.699755 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=ef9548bf-0c3d-4d93-a061-9a51d2b56cd2]
2026-01-05 00:02:54.707570 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-05 00:02:54.715422 | orchestrator | local_file.inventory: Creating...
2026-01-05 00:02:54.716679 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-05 00:02:54.720231 | orchestrator | local_file.inventory: Creation complete after 0s [id=7e413b440a00470a9238bec1315e4451f0e19ce0]
2026-01-05 00:02:54.726481 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e7b5bfa78c40ae24d94246393d750cce4b2674b5]
2026-01-05 00:02:55.603144 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ef9548bf-0c3d-4d93-a061-9a51d2b56cd2]
2026-01-05 00:03:03.127056 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-05 00:03:03.134390 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-05 00:03:03.134486 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-05 00:03:03.134506 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-05 00:03:03.136681 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-05 00:03:03.153152 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-05 00:03:13.136436 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-05 00:03:13.136559 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-05 00:03:13.136572 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-05 00:03:13.136582 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-05 00:03:13.137526 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-05 00:03:13.154238 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-05 00:03:13.764486 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=8eb97f2a-1b99-4446-a05b-86e2f2594188]
2026-01-05 00:03:13.940985 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=61b27fdd-c416-4f2d-9397-a369abd02350]
2026-01-05 00:03:14.002580 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=0301f87b-6755-4231-95f1-c212cc8c065b]
2026-01-05 00:03:23.145773 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-05 00:03:23.145933 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-05 00:03:23.155320 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-05 00:03:24.054178 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=31c599b6-7669-4870-aa37-c717ff824927]
2026-01-05 00:03:24.089718 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=93ba081c-1c31-4a29-b94a-f8494d85b83b]
2026-01-05 00:03:24.130316 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=35bd06d8-eb77-45c5-8c35-f05a8ad93504]
2026-01-05 00:03:24.148164 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-05 00:03:24.154496 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6869226677558448378]
2026-01-05 00:03:24.167065 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-05 00:03:24.172905 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-05 00:03:24.178160 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-05 00:03:24.178694 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-05 00:03:24.178888 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-05 00:03:24.182161 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-05 00:03:24.194073 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-05 00:03:24.195795 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-05 00:03:24.211640 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-05 00:03:24.214898 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
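Editor's aside: the `openstack_compute_volume_attach_v2` completions that follow report ids of the form `<server_id>/<volume_id>` (e.g. `61b27fdd-.../383c4b06-...`), so the attachment id alone tells you which volume landed on which server. A trivial sketch (the helper name is illustrative; the sample id is taken from this log):

```python
def split_attachment_id(attach_id: str):
    """Split a volume-attachment id of the form '<server_id>/<volume_id>'."""
    server_id, volume_id = attach_id.split("/", 1)
    return server_id, volume_id

server, volume = split_attachment_id(
    "61b27fdd-c416-4f2d-9397-a369abd02350/383c4b06-6a59-4554-8f50-cd156928eda0")
print(server)  # the node_server instance id
print(volume)  # the attached node_volume id
```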
2026-01-05 00:03:27.592859 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=61b27fdd-c416-4f2d-9397-a369abd02350/383c4b06-6a59-4554-8f50-cd156928eda0] 2026-01-05 00:03:27.599616 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=31c599b6-7669-4870-aa37-c717ff824927/ecd5c862-499b-48c6-9c2d-7fcffc72f10c] 2026-01-05 00:03:27.620426 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=93ba081c-1c31-4a29-b94a-f8494d85b83b/891598f0-de5b-4bdc-89c5-6a431d2de302] 2026-01-05 00:03:27.626335 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=31c599b6-7669-4870-aa37-c717ff824927/03b7017e-e1b0-457d-9587-8b11f2102bb3] 2026-01-05 00:03:27.646099 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=61b27fdd-c416-4f2d-9397-a369abd02350/b9761713-1df3-4432-b4b3-360f49d55392] 2026-01-05 00:03:27.661433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=93ba081c-1c31-4a29-b94a-f8494d85b83b/b121dca3-24d1-4b7b-930a-60908a09b3ff] 2026-01-05 00:03:33.728924 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=31c599b6-7669-4870-aa37-c717ff824927/b0dfd45a-7f89-49c0-be70-a4c437682b52] 2026-01-05 00:03:33.735942 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=61b27fdd-c416-4f2d-9397-a369abd02350/2b1c1f48-cee6-4c03-87f8-c43c8286bcc4] 2026-01-05 00:03:33.765177 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=93ba081c-1c31-4a29-b94a-f8494d85b83b/f8dcabc6-fabd-45fd-9c41-4607b08934e9] 2026-01-05 00:03:34.215729 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-05 00:03:44.223489 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-05 00:03:44.829426 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=9275f069-c641-46fa-99d5-cc4fef8ac284]
2026-01-05 00:03:44.854370 | orchestrator |
2026-01-05 00:03:44.854491 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-05 00:03:44.854508 | orchestrator |
2026-01-05 00:03:44.854522 | orchestrator | Outputs:
2026-01-05 00:03:44.854535 | orchestrator |
2026-01-05 00:03:44.854545 | orchestrator | manager_address =
2026-01-05 00:03:44.854555 | orchestrator | private_key =
2026-01-05 00:03:45.115058 | orchestrator | ok: Runtime: 0:01:16.526090
2026-01-05 00:03:45.149089 |
2026-01-05 00:03:45.149228 | TASK [Fetch manager address]
2026-01-05 00:03:45.746033 | orchestrator | ok
2026-01-05 00:03:45.756804 |
2026-01-05 00:03:45.756942 | TASK [Set manager_host address]
2026-01-05 00:03:45.842206 | orchestrator | ok
2026-01-05 00:03:45.849727 |
2026-01-05 00:03:45.849848 | LOOP [Update ansible collections]
2026-01-05 00:03:47.206305 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-05 00:03:47.206663 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-05 00:03:47.206705 | orchestrator | Starting galaxy collection install process
2026-01-05 00:03:47.206731 | orchestrator | Process install dependency map
2026-01-05 00:03:47.206753 | orchestrator | Starting collection install process
2026-01-05 00:03:47.206774 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-01-05 00:03:47.206796 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-01-05 00:03:47.207270 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-05 00:03:47.207342 | orchestrator | ok: Item: commons Runtime: 0:00:01.010083
2026-01-05 00:03:48.509790 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-05 00:03:48.509973 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-05 00:03:48.510027 | orchestrator | Starting galaxy collection install process
2026-01-05 00:03:48.510069 | orchestrator | Process install dependency map
2026-01-05 00:03:48.510106 | orchestrator | Starting collection install process
2026-01-05 00:03:48.510142 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-01-05 00:03:48.510179 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-01-05 00:03:48.510214 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-05 00:03:48.510277 | orchestrator | ok: Item: services Runtime: 0:00:00.921581
2026-01-05 00:03:48.523052 |
2026-01-05 00:03:48.523216 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-05 00:03:59.194692 | orchestrator | ok
2026-01-05 00:03:59.204460 |
2026-01-05 00:03:59.204600 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-05 00:04:59.243300 | orchestrator | ok
2026-01-05 00:04:59.253112 |
2026-01-05 00:04:59.253282 | TASK [Fetch manager ssh hostkey]
2026-01-05 00:05:00.846134 | orchestrator | Output suppressed because no_log was given
2026-01-05 00:05:00.859320 |
2026-01-05 00:05:00.859502 | TASK [Get ssh keypair from terraform environment]
2026-01-05 00:05:01.400049 | orchestrator | ok: Runtime: 0:00:00.007853
2026-01-05 00:05:01.409226 |
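Editor's aside: the "Runtime: 0:01:16.526090" and "Runtime: 0:00:00.007853" strings Zuul prints above use an `H:MM:SS.ffffff` layout. A small sketch of converting them into a `timedelta` for per-task timing reports (the function name is illustrative, not part of the job):

```python
from datetime import timedelta

def parse_runtime(value: str) -> timedelta:
    """Parse a Zuul 'H:MM:SS.ffffff' runtime string into a timedelta."""
    hours, minutes, seconds = value.split(":")
    return timedelta(hours=int(hours), minutes=int(minutes), seconds=float(seconds))

# The Terraform apply step above took just over 76 seconds.
print(parse_runtime("0:01:16.526090").total_seconds())  # -> 76.52609
```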
2026-01-05 00:05:01.409415 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-05 00:05:01.454582 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-05 00:05:01.462787 |
2026-01-05 00:05:01.462963 | TASK [Run manager part 0]
2026-01-05 00:05:02.600464 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-05 00:05:02.654726 | orchestrator |
2026-01-05 00:05:02.654787 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-05 00:05:02.654794 | orchestrator |
2026-01-05 00:05:02.654809 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-05 00:05:04.561081 | orchestrator | ok: [testbed-manager]
2026-01-05 00:05:04.561154 | orchestrator |
2026-01-05 00:05:04.561181 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-05 00:05:04.561191 | orchestrator |
2026-01-05 00:05:04.561200 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:05:06.499616 | orchestrator | ok: [testbed-manager]
2026-01-05 00:05:06.499717 | orchestrator |
2026-01-05 00:05:06.499739 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-05 00:05:07.190477 | orchestrator | ok: [testbed-manager]
2026-01-05 00:05:07.190553 | orchestrator |
2026-01-05 00:05:07.190563 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-05 00:05:07.237553 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.237633 | orchestrator |
2026-01-05 00:05:07.237645 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-05 00:05:07.274068 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.274143 | orchestrator |
2026-01-05 00:05:07.274154 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-05 00:05:07.306346 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.306413 | orchestrator |
2026-01-05 00:05:07.306420 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-05 00:05:07.346923 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.346991 | orchestrator |
2026-01-05 00:05:07.346997 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-05 00:05:07.377576 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.377660 | orchestrator |
2026-01-05 00:05:07.377670 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-05 00:05:07.415318 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.415397 | orchestrator |
2026-01-05 00:05:07.415407 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-05 00:05:07.463164 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:05:07.463262 | orchestrator |
2026-01-05 00:05:07.463276 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-05 00:05:08.241297 | orchestrator | changed: [testbed-manager]
2026-01-05 00:05:08.241388 | orchestrator |
2026-01-05 00:05:08.241406 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-05 00:07:57.680426 | orchestrator | changed: [testbed-manager]
2026-01-05 00:07:57.680533 | orchestrator |
2026-01-05 00:07:57.680554 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-05 00:09:20.074177 | orchestrator | changed: [testbed-manager]
2026-01-05 00:09:20.074286 | orchestrator |
2026-01-05 00:09:20.074304 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-05 00:09:47.891383 | orchestrator | changed: [testbed-manager]
2026-01-05 00:09:47.891501 | orchestrator |
2026-01-05 00:09:47.891522 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-05 00:09:57.788987 | orchestrator | changed: [testbed-manager]
2026-01-05 00:09:57.789072 | orchestrator |
2026-01-05 00:09:57.789089 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-05 00:09:57.843816 | orchestrator | ok: [testbed-manager]
2026-01-05 00:09:57.843890 | orchestrator |
2026-01-05 00:09:57.843900 | orchestrator | TASK [Get current user] ********************************************************
2026-01-05 00:09:58.694831 | orchestrator | ok: [testbed-manager]
2026-01-05 00:09:58.694924 | orchestrator |
2026-01-05 00:09:58.694940 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-05 00:09:59.437629 | orchestrator | changed: [testbed-manager]
2026-01-05 00:09:59.437728 | orchestrator |
2026-01-05 00:09:59.437773 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-05 00:10:05.990473 | orchestrator | changed: [testbed-manager]
2026-01-05 00:10:05.990593 | orchestrator |
2026-01-05 00:10:05.990645 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-05 00:10:12.454078 | orchestrator | changed: [testbed-manager]
2026-01-05 00:10:12.455008 | orchestrator |
2026-01-05 00:10:12.455042 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-05 00:10:15.189763 | orchestrator | changed: [testbed-manager]
2026-01-05 00:10:15.189889 | orchestrator |
2026-01-05 00:10:15.189907 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-01-05 00:10:17.110173 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:17.110237 | orchestrator | 2026-01-05 00:10:17.110245 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-05 00:10:18.233997 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:18.234064 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:18.234070 | orchestrator | 2026-01-05 00:10:18.234076 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-05 00:10:18.284915 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:10:18.285005 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:10:18.285019 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:10:18.285031 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-05 00:10:22.953220 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:22.953306 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:22.953318 | orchestrator | 2026-01-05 00:10:22.953329 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-05 00:10:23.545614 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:23.545664 | orchestrator | 2026-01-05 00:10:23.545671 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-05 00:10:44.738580 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-05 00:10:44.739638 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-05 00:10:44.739668 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-05 00:10:44.739675 | orchestrator | 2026-01-05 00:10:44.739684 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-05 00:10:47.201728 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-05 00:10:47.202477 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-05 00:10:47.202487 | orchestrator | 2026-01-05 00:10:47.202496 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-05 00:10:47.202504 | orchestrator | 2026-01-05 00:10:47.202511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:10:48.676409 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:48.676456 | orchestrator | 2026-01-05 00:10:48.676464 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:10:48.718692 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:48.718753 | 
orchestrator | 2026-01-05 00:10:48.718761 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:10:48.787420 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:48.787475 | orchestrator | 2026-01-05 00:10:48.787482 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-05 00:10:49.571824 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:49.572168 | orchestrator | 2026-01-05 00:10:49.572193 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-05 00:10:50.325266 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:50.325397 | orchestrator | 2026-01-05 00:10:50.325414 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-05 00:10:51.756627 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-05 00:10:51.756721 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-05 00:10:51.756742 | orchestrator | 2026-01-05 00:10:51.756786 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-05 00:10:53.208334 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:53.208425 | orchestrator | 2026-01-05 00:10:53.208434 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-05 00:10:55.135323 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:10:55.136530 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-05 00:10:55.136553 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:10:55.136560 | orchestrator | 2026-01-05 00:10:55.136570 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-05 00:10:55.198151 | orchestrator | skipping: 
[testbed-manager] 2026-01-05 00:10:55.198256 | orchestrator | 2026-01-05 00:10:55.198274 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-05 00:10:55.276043 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:55.276118 | orchestrator | 2026-01-05 00:10:55.276125 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-05 00:10:55.897882 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:55.897948 | orchestrator | 2026-01-05 00:10:55.897956 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-05 00:10:55.969953 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:55.970107 | orchestrator | 2026-01-05 00:10:55.970120 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-05 00:10:56.864178 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:10:56.864246 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:56.864260 | orchestrator | 2026-01-05 00:10:56.864269 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-05 00:10:56.900403 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:56.900461 | orchestrator | 2026-01-05 00:10:56.900470 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-05 00:10:56.943580 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:56.943647 | orchestrator | 2026-01-05 00:10:56.943657 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-05 00:10:56.985462 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:56.985577 | orchestrator | 2026-01-05 00:10:56.985597 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-05 00:10:57.057130 | 
orchestrator | skipping: [testbed-manager] 2026-01-05 00:10:57.057204 | orchestrator | 2026-01-05 00:10:57.057212 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-05 00:10:57.803485 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:57.803581 | orchestrator | 2026-01-05 00:10:57.803598 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:10:57.803611 | orchestrator | 2026-01-05 00:10:57.803622 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:10:59.267547 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:59.267637 | orchestrator | 2026-01-05 00:10:59.267651 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-05 00:11:00.240562 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:00.241665 | orchestrator | 2026-01-05 00:11:00.241692 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:11:00.241709 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-05 00:11:00.241722 | orchestrator | 2026-01-05 00:11:00.754451 | orchestrator | ok: Runtime: 0:05:58.490122 2026-01-05 00:11:00.772777 | 2026-01-05 00:11:00.772945 | TASK [Point out that logging in to the manager is now possible] 2026-01-05 00:11:00.820978 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-05 00:11:00.831024 | 2026-01-05 00:11:00.831184 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:11:00.866051 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 
2026-01-05 00:11:00.873700 | 2026-01-05 00:11:00.873828 | TASK [Run manager part 1 + 2] 2026-01-05 00:11:01.759605 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:11:01.822186 | orchestrator | 2026-01-05 00:11:01.822256 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-05 00:11:01.822268 | orchestrator | 2026-01-05 00:11:01.822287 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:11:04.472750 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:04.472801 | orchestrator | 2026-01-05 00:11:04.472823 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:11:04.518952 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:04.519005 | orchestrator | 2026-01-05 00:11:04.519013 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:11:04.574911 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:04.574967 | orchestrator | 2026-01-05 00:11:04.574976 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-05 00:11:04.613605 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:04.613655 | orchestrator | 2026-01-05 00:11:04.613662 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:11:04.689394 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:04.689450 | orchestrator | 2026-01-05 00:11:04.689460 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:11:04.748146 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:04.748203 | orchestrator | 2026-01-05 00:11:04.748214 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:11:04.810320 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-05 00:11:04.810372 | orchestrator | 2026-01-05 00:11:04.810379 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-05 00:11:05.550917 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:05.551251 | orchestrator | 2026-01-05 00:11:05.551269 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:11:05.605372 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:05.605414 | orchestrator | 2026-01-05 00:11:05.605421 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:11:06.958471 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:06.958524 | orchestrator | 2026-01-05 00:11:06.958536 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:11:07.556230 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:07.556275 | orchestrator | 2026-01-05 00:11:07.556282 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-05 00:11:08.669234 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:08.669269 | orchestrator | 2026-01-05 00:11:08.669276 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:11:25.520089 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:25.520216 | orchestrator | 2026-01-05 00:11:25.520232 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:11:26.244034 | orchestrator | ok: [testbed-manager] 2026-01-05 00:11:26.244103 | orchestrator | 2026-01-05 00:11:26.244114 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-05 00:11:26.300051 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:26.300193 | orchestrator | 2026-01-05 00:11:26.300213 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-05 00:11:27.287247 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:27.287347 | orchestrator | 2026-01-05 00:11:27.287362 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-05 00:11:28.303360 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:28.303467 | orchestrator | 2026-01-05 00:11:28.303495 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-05 00:11:28.906342 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:28.906399 | orchestrator | 2026-01-05 00:11:28.906406 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-05 00:11:28.951849 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:11:28.951970 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:11:28.951988 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:11:28.952191 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-05 00:11:31.937643 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:31.937693 | orchestrator | 2026-01-05 00:11:31.937701 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-05 00:11:41.438634 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-05 00:11:41.438731 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-05 00:11:41.438750 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-05 00:11:41.438762 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-05 00:11:41.438783 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-05 00:11:41.438795 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-05 00:11:41.438806 | orchestrator | 2026-01-05 00:11:41.438820 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-05 00:11:42.525560 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:42.525668 | orchestrator | 2026-01-05 00:11:42.525703 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-05 00:11:42.565725 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:42.565823 | orchestrator | 2026-01-05 00:11:42.565840 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-05 00:11:45.824460 | orchestrator | changed: [testbed-manager] 2026-01-05 00:11:45.824548 | orchestrator | 2026-01-05 00:11:45.824563 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-05 00:11:45.869794 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:11:45.869889 | orchestrator | 2026-01-05 00:11:45.869907 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-05 00:13:33.033421 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:13:33.033525 | orchestrator | 2026-01-05 00:13:33.033547 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:13:34.261406 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:34.263123 | orchestrator | 2026-01-05 00:13:34.263155 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:13:34.263177 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-05 00:13:34.263196 | orchestrator | 2026-01-05 00:13:34.518817 | orchestrator | ok: Runtime: 0:02:33.169342 2026-01-05 00:13:34.539430 | 2026-01-05 00:13:34.539595 | TASK [Reboot manager] 2026-01-05 00:13:36.081097 | orchestrator | ok: Runtime: 0:00:00.979811 2026-01-05 00:13:36.099548 | 2026-01-05 00:13:36.099842 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:13:53.552363 | orchestrator | ok 2026-01-05 00:13:53.567592 | 2026-01-05 00:13:53.567998 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:14:53.617042 | orchestrator | ok 2026-01-05 00:14:53.628089 | 2026-01-05 00:14:53.628268 | TASK [Deploy manager + bootstrap nodes] 2026-01-05 00:14:56.511162 | orchestrator | 2026-01-05 00:14:56.511263 | orchestrator | # DEPLOY MANAGER 2026-01-05 00:14:56.511271 | orchestrator | 2026-01-05 00:14:56.511276 | orchestrator | + set -e 2026-01-05 00:14:56.511280 | orchestrator | + echo 2026-01-05 00:14:56.511286 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-05 00:14:56.511293 | orchestrator | + echo 2026-01-05 00:14:56.511313 | orchestrator | + cat /opt/manager-vars.sh 2026-01-05 00:14:56.515101 | orchestrator | export NUMBER_OF_NODES=6 2026-01-05 00:14:56.515138 | orchestrator | 2026-01-05 00:14:56.515143 | orchestrator | export CEPH_VERSION=reef 2026-01-05 00:14:56.515149 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-05 00:14:56.515156 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-01-05 00:14:56.515170 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-05 00:14:56.515174 | orchestrator | 2026-01-05 00:14:56.515182 | orchestrator | export ARA=false 2026-01-05 00:14:56.515186 | orchestrator | export DEPLOY_MODE=manager 2026-01-05 00:14:56.515193 | orchestrator | export TEMPEST=true 2026-01-05 00:14:56.515197 | orchestrator | export IS_ZUUL=true 2026-01-05 00:14:56.515201 | orchestrator | 2026-01-05 00:14:56.515208 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 00:14:56.515213 | orchestrator | export EXTERNAL_API=false 2026-01-05 00:14:56.515216 | orchestrator | 2026-01-05 00:14:56.515220 | orchestrator | export IMAGE_USER=ubuntu 2026-01-05 00:14:56.515227 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-05 00:14:56.515231 | orchestrator | 2026-01-05 00:14:56.515234 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-05 00:14:56.515575 | orchestrator | 2026-01-05 00:14:56.515584 | orchestrator | + echo 2026-01-05 00:14:56.515589 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:14:56.516430 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:14:56.516447 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:14:56.516451 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:14:56.516456 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:14:56.516683 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:14:56.516690 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:14:56.516694 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:14:56.516697 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:14:56.516701 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:14:56.516764 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 00:14:56.516771 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:14:56.516775 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 00:14:56.516778 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 00:14:56.516782 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:14:56.516795 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:14:56.516799 | orchestrator | ++ export ARA=false 2026-01-05 00:14:56.516803 | orchestrator | ++ ARA=false 2026-01-05 00:14:56.516822 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:14:56.516826 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:14:56.516831 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:14:56.516835 | orchestrator | ++ TEMPEST=true 2026-01-05 00:14:56.516838 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:14:56.516842 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:14:56.516848 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 00:14:56.516852 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 00:14:56.516856 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:14:56.516860 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:14:56.516863 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:14:56.516867 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:14:56.516871 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:14:56.516874 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:14:56.516879 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:14:56.516883 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:14:56.516888 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-05 00:14:56.577971 | orchestrator | + docker version 2026-01-05 00:14:56.883338 | orchestrator | Client: Docker Engine - Community 2026-01-05 00:14:56.883450 | orchestrator | Version: 27.5.1 2026-01-05 00:14:56.883466 | orchestrator | API version: 1.47 2026-01-05 00:14:56.883481 | orchestrator | Go version: go1.22.11 2026-01-05 00:14:56.883492 | orchestrator | Git commit: 9f9e405 2026-01-05 00:14:56.883503 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:14:56.883515 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:14:56.883527 | orchestrator | Context: default 2026-01-05 00:14:56.883538 | orchestrator | 2026-01-05 00:14:56.883549 | orchestrator | Server: Docker Engine - Community 2026-01-05 00:14:56.883560 | orchestrator | Engine: 2026-01-05 00:14:56.883571 | orchestrator | Version: 27.5.1 2026-01-05 00:14:56.883582 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-05 00:14:56.883629 | orchestrator | Go version: go1.22.11 2026-01-05 00:14:56.883641 | orchestrator | Git commit: 4c9b3b0 2026-01-05 00:14:56.883651 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:14:56.883662 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:14:56.883672 | orchestrator | Experimental: false 2026-01-05 00:14:56.883683 | orchestrator | containerd: 2026-01-05 00:14:56.883694 | orchestrator | Version: v2.2.1 2026-01-05 00:14:56.883705 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-05 00:14:56.883716 | orchestrator | runc: 2026-01-05 00:14:56.883726 | orchestrator | Version: 1.3.4 2026-01-05 00:14:56.883737 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-05 00:14:56.883748 | orchestrator | docker-init: 2026-01-05 00:14:56.883758 | orchestrator | Version: 0.19.0 2026-01-05 00:14:56.883769 | orchestrator | GitCommit: de40ad0 2026-01-05 00:14:56.886109 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-05 00:14:56.896940 | orchestrator | + set -e 2026-01-05 00:14:56.897031 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:14:56.897046 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:14:56.897059 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:14:56.897070 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:14:56.897081 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:14:56.897092 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 
00:14:56.897103 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:14:56.897114 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 00:14:56.897125 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 00:14:56.897135 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:14:56.897146 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:14:56.897157 | orchestrator | ++ export ARA=false 2026-01-05 00:14:56.897168 | orchestrator | ++ ARA=false 2026-01-05 00:14:56.897178 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:14:56.897189 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:14:56.897200 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:14:56.897210 | orchestrator | ++ TEMPEST=true 2026-01-05 00:14:56.897221 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:14:56.897231 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:14:56.897242 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 00:14:56.897253 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 00:14:56.897264 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:14:56.897286 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:14:56.897297 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:14:56.897308 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:14:56.897319 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:14:56.897329 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:14:56.897340 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:14:56.897351 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:14:56.897362 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:14:56.897372 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:14:56.897383 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:14:56.897393 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:14:56.897408 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-01-05 00:14:56.897423 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-05 00:14:56.897435 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-01-05 00:14:56.905125 | orchestrator | + set -e
2026-01-05 00:14:56.905205 | orchestrator | + VERSION=9.5.0
2026-01-05 00:14:56.905225 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-01-05 00:14:56.909903 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-05 00:14:56.909953 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-05 00:14:56.912057 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-05 00:14:56.917387 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-01-05 00:14:56.926465 | orchestrator | /opt/configuration ~
2026-01-05 00:14:56.926521 | orchestrator | + set -e
2026-01-05 00:14:56.926533 | orchestrator | + pushd /opt/configuration
2026-01-05 00:14:56.926544 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-05 00:14:56.930170 | orchestrator | + source /opt/venv/bin/activate
2026-01-05 00:14:56.931636 | orchestrator | ++ deactivate nondestructive
2026-01-05 00:14:56.931679 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:14:56.931693 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:14:56.931738 | orchestrator | ++ hash -r
2026-01-05 00:14:56.931749 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:14:56.931760 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-05 00:14:56.931771 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-05 00:14:56.931782 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-05 00:14:56.932338 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-05 00:14:56.932355 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-05 00:14:56.932366 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-05 00:14:56.932376 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-05 00:14:56.932389 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:14:56.932457 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:14:56.932557 | orchestrator | ++ export PATH
2026-01-05 00:14:56.932746 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:14:56.932762 | orchestrator | ++ '[' -z '' ']'
2026-01-05 00:14:56.932902 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-05 00:14:56.932919 | orchestrator | ++ PS1='(venv) '
2026-01-05 00:14:56.932929 | orchestrator | ++ export PS1
2026-01-05 00:14:56.932982 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-05 00:14:56.932996 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-05 00:14:56.933007 | orchestrator | ++ hash -r
2026-01-05 00:14:56.933056 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-05 00:14:58.337472 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-05 00:14:58.338334 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-05 00:14:58.339890 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-05 00:14:58.341293 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-05 00:14:58.342713 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2026-01-05 00:14:58.352734 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-05 00:14:58.354231 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-05 00:14:58.355331 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-05 00:14:58.356769 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-05 00:14:58.392044 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-05 00:14:58.393443 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-05 00:14:58.395450 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.2)
2026-01-05 00:14:58.396605 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-05 00:14:58.400897 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-05 00:14:58.633210 | orchestrator | ++ which gilt
2026-01-05 00:14:58.638569 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-05 00:14:58.638633 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-05 00:14:58.892716 | orchestrator | osism.cfg-generics:
2026-01-05 00:14:59.056106 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-05 00:14:59.056195 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-05 00:14:59.057210 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-05 00:14:59.057223 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-05 00:14:59.763088 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-05 00:14:59.774884 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-05 00:15:00.098378 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-05 00:15:00.146327 | orchestrator | ~
2026-01-05 00:15:00.146456 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-05 00:15:00.146480 | orchestrator | + deactivate
2026-01-05 00:15:00.146501 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-05 00:15:00.146520 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:15:00.146537 | orchestrator | + export PATH
2026-01-05 00:15:00.146556 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-05 00:15:00.146569 | orchestrator | + '[' -n '' ']'
2026-01-05 00:15:00.146582 | orchestrator | + hash -r
2026-01-05 00:15:00.146592 | orchestrator | + '[' -n '' ']'
2026-01-05 00:15:00.146602 | orchestrator | + unset VIRTUAL_ENV
2026-01-05 00:15:00.146612 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-05 00:15:00.146622 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-05 00:15:00.146631 | orchestrator | + unset -f deactivate
2026-01-05 00:15:00.146642 | orchestrator | + popd
2026-01-05 00:15:00.147062 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-05 00:15:00.147082 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-05 00:15:00.147810 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-05 00:15:00.198165 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:15:00.198261 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-05 00:15:00.199172 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-05 00:15:00.259283 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:15:00.260228 | orchestrator | ++ semver 2024.2 2025.1
2026-01-05 00:15:00.323409 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:15:00.323481 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-05 00:15:00.418280 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-05 00:15:00.418357 | orchestrator | + source /opt/venv/bin/activate
2026-01-05 00:15:00.418364 | orchestrator | ++ deactivate nondestructive
2026-01-05 00:15:00.418378 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:15:00.418384 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:15:00.418389 | orchestrator | ++ hash -r
2026-01-05 00:15:00.418394 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:15:00.418400 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-05 00:15:00.418404 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-05 00:15:00.418410 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-05 00:15:00.418553 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-05 00:15:00.418563 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-05 00:15:00.418607 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-05 00:15:00.418614 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-05 00:15:00.418739 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:15:00.419023 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:15:00.419042 | orchestrator | ++ export PATH
2026-01-05 00:15:00.419053 | orchestrator | ++ '[' -n '' ']'
2026-01-05 00:15:00.419065 | orchestrator | ++ '[' -z '' ']'
2026-01-05 00:15:00.419074 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-05 00:15:00.419103 | orchestrator | ++ PS1='(venv) '
2026-01-05 00:15:00.419113 | orchestrator | ++ export PS1
2026-01-05 00:15:00.419212 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-05 00:15:00.419225 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-05 00:15:00.419234 | orchestrator | ++ hash -r
2026-01-05 00:15:00.419514 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-05 00:15:01.781272 | orchestrator |
2026-01-05 00:15:01.781357 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-05 00:15:01.781383 | orchestrator |
2026-01-05 00:15:01.781396 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:15:02.420953 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:02.421047 | orchestrator |
2026-01-05 00:15:02.421064 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-05 00:15:03.493673 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:03.493733 | orchestrator |
2026-01-05 00:15:03.493742 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-05 00:15:03.493767 | orchestrator |
2026-01-05 00:15:03.493774 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:15:05.988514 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:05.988613 | orchestrator |
2026-01-05 00:15:05.988630 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-05 00:15:06.037326 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:06.037410 | orchestrator |
2026-01-05 00:15:06.037425 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-05 00:15:06.515223 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:06.515308 | orchestrator |
2026-01-05 00:15:06.515323 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-05 00:15:06.569284 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:06.569361 | orchestrator |
2026-01-05 00:15:06.569375 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-05 00:15:06.966089 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:06.966186 | orchestrator |
2026-01-05 00:15:06.966203 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-05 00:15:07.019182 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:07.019270 | orchestrator |
2026-01-05 00:15:07.019286 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-05 00:15:07.377230 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:07.377346 | orchestrator |
2026-01-05 00:15:07.377363 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-05 00:15:07.519987 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:07.520078 | orchestrator |
2026-01-05 00:15:07.520094 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-05 00:15:07.520106 | orchestrator |
2026-01-05 00:15:07.520118 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:15:09.321748 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:09.321879 | orchestrator |
2026-01-05 00:15:09.321899 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-05 00:15:09.427251 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-05 00:15:09.427344 | orchestrator |
2026-01-05 00:15:09.427360 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-05 00:15:09.483236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-05 00:15:09.483321 | orchestrator |
2026-01-05 00:15:09.483336 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-05 00:15:10.630993 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-05 00:15:10.631086 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-05 00:15:10.631104 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-05 00:15:10.631116 | orchestrator |
2026-01-05 00:15:10.631128 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-05 00:15:12.572489 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-05 00:15:12.572602 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-05 00:15:12.572616 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-05 00:15:12.572629 | orchestrator |
2026-01-05 00:15:12.572641 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-05 00:15:13.178063 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:15:13.178150 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:13.178159 | orchestrator |
2026-01-05 00:15:13.178166 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-05 00:15:13.763999 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:15:13.764110 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:13.764127 | orchestrator |
2026-01-05 00:15:13.764139 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-05 00:15:13.818840 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:13.818958 | orchestrator |
2026-01-05 00:15:13.818972 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-05 00:15:14.161792 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:14.161930 | orchestrator |
2026-01-05 00:15:14.161950 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-05 00:15:14.240837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-05 00:15:14.240978 | orchestrator |
2026-01-05 00:15:14.240994 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-05 00:15:15.240987 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:15.241103 | orchestrator |
2026-01-05 00:15:15.241120 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-05 00:15:16.037206 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:16.037319 | orchestrator |
2026-01-05 00:15:16.037337 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-05 00:15:31.008790 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:31.008939 | orchestrator |
2026-01-05 00:15:31.009039 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-05 00:15:31.065793 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:31.065891 | orchestrator |
2026-01-05 00:15:31.065937 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-05 00:15:31.065951 | orchestrator |
2026-01-05 00:15:31.065963 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:15:33.012231 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:33.012337 | orchestrator |
2026-01-05 00:15:33.012354 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-05 00:15:33.144985 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-05 00:15:33.145089 | orchestrator |
2026-01-05 00:15:33.145104 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-05 00:15:33.198873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:15:33.198991 | orchestrator |
2026-01-05 00:15:33.199007 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-05 00:15:36.019093 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:36.019201 | orchestrator |
2026-01-05 00:15:36.019217 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-05 00:15:36.080745 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:36.080851 | orchestrator |
2026-01-05 00:15:36.080869 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-05 00:15:36.206903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-05 00:15:36.207015 | orchestrator |
2026-01-05 00:15:36.207032 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-05 00:15:39.229720 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-05 00:15:39.230895 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-05 00:15:39.231017 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-05 00:15:39.231034 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-05 00:15:39.231046 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-05 00:15:39.231058 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-05 00:15:39.231069 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-05 00:15:39.231080 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-05 00:15:39.231092 | orchestrator |
2026-01-05 00:15:39.231106 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-05 00:15:39.910535 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:39.910638 | orchestrator |
2026-01-05 00:15:39.910655 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-05 00:15:40.597326 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:40.597436 | orchestrator |
2026-01-05 00:15:40.597453 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-05 00:15:40.678328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-05 00:15:40.678478 | orchestrator |
2026-01-05 00:15:40.678497 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-05 00:15:41.985036 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-05 00:15:41.985138 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-05 00:15:41.985153 | orchestrator |
2026-01-05 00:15:41.985167 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-05 00:15:42.642172 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:42.642268 | orchestrator |
2026-01-05 00:15:42.642285 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-05 00:15:42.700579 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:42.700656 | orchestrator |
2026-01-05 00:15:42.700671 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-05 00:15:42.784048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-05 00:15:42.784159 | orchestrator |
2026-01-05 00:15:42.784175 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-05 00:15:43.424228 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:43.424334 | orchestrator |
2026-01-05 00:15:43.424352 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-05 00:15:43.488680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-05 00:15:43.488777 | orchestrator |
2026-01-05 00:15:43.488791 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-05 00:15:44.885243 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:15:44.885356 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:15:44.885371 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:44.885384 | orchestrator |
2026-01-05 00:15:44.885396 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-05 00:15:45.540846 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:45.541086 | orchestrator |
2026-01-05 00:15:45.541123 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-05 00:15:45.591474 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:45.591574 | orchestrator |
2026-01-05 00:15:45.591617 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-05 00:15:45.692589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-05 00:15:45.692700 | orchestrator |
2026-01-05 00:15:45.692713 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-05 00:15:46.218145 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:46.218240 | orchestrator |
2026-01-05 00:15:46.218254 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-05 00:15:46.652094 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:46.652208 | orchestrator |
2026-01-05 00:15:46.652223 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-05 00:15:47.950568 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-05 00:15:47.950672 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-05 00:15:47.950686 | orchestrator |
2026-01-05 00:15:47.950698 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-05 00:15:48.606775 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:48.606897 | orchestrator |
2026-01-05 00:15:48.606920 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-05 00:15:49.029432 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:49.029543 | orchestrator |
2026-01-05 00:15:49.029559 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-05 00:15:49.402694 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:49.402793 | orchestrator |
2026-01-05 00:15:49.402808 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-05 00:15:49.451649 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:49.451773 | orchestrator |
2026-01-05 00:15:49.451787 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-05 00:15:49.523785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-05 00:15:49.523855 | orchestrator |
2026-01-05 00:15:49.523860 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-05 00:15:49.572163 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:49.572229 | orchestrator |
2026-01-05 00:15:49.572236 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-05 00:15:51.690739 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-05 00:15:51.690822 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-05 00:15:51.690836 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-05 00:15:51.690848 | orchestrator |
2026-01-05 00:15:51.690859 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-05 00:15:52.424419 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:52.424497 | orchestrator |
2026-01-05 00:15:52.424504 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-05 00:15:53.187349 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:53.187432 | orchestrator |
2026-01-05 00:15:53.187439 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-05 00:15:53.944598 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:53.944712 | orchestrator |
2026-01-05 00:15:53.944729 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-05 00:15:54.030140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-05 00:15:54.030215 | orchestrator |
2026-01-05 00:15:54.030221 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-05 00:15:54.086497 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:54.086595 | orchestrator |
2026-01-05 00:15:54.086608 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-05 00:15:54.790333 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-05 00:15:54.790441 | orchestrator |
2026-01-05 00:15:54.790455 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-05 00:15:54.880689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-05 00:15:54.880789 | orchestrator |
2026-01-05 00:15:54.880805 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-05 00:15:55.626146 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:55.626266 | orchestrator |
2026-01-05 00:15:55.626285 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-05 00:15:56.247820 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:56.247920 | orchestrator |
2026-01-05 00:15:56.247934 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-05 00:15:56.307565 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:15:56.307660 | orchestrator |
2026-01-05 00:15:56.307672 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-05 00:15:56.373658 | orchestrator | ok: [testbed-manager]
2026-01-05 00:15:56.373756 | orchestrator |
2026-01-05 00:15:56.373772 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-05 00:15:57.214939 | orchestrator | changed: [testbed-manager]
2026-01-05 00:15:57.215182 | orchestrator |
2026-01-05 00:15:57.215204 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-05 00:17:12.902949 | orchestrator | changed: [testbed-manager]
2026-01-05 00:17:12.903040 | orchestrator |
2026-01-05 00:17:12.903051 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-05 00:17:13.982737 | orchestrator | ok: [testbed-manager]
2026-01-05 00:17:13.982842 | orchestrator |
2026-01-05 00:17:13.982863 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-05 00:17:14.041675 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:17:14.041827 | orchestrator |
2026-01-05 00:17:14.041854 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-05 00:17:18.733518 | orchestrator | changed: [testbed-manager]
2026-01-05 00:17:18.733625 | orchestrator |
2026-01-05 00:17:18.733641 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-05 00:17:18.838337 | orchestrator | ok: [testbed-manager]
2026-01-05 00:17:18.838439 | orchestrator |
2026-01-05 00:17:18.838456 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-05 00:17:18.838469 | orchestrator |
2026-01-05 00:17:18.838481 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-05 00:17:18.900487 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:17:18.900570 | orchestrator |
2026-01-05 00:17:18.900583 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-05 00:18:18.945121 | orchestrator | Pausing for 60 seconds
2026-01-05 00:18:18.945247 | orchestrator | changed: [testbed-manager]
2026-01-05 00:18:18.945323 | orchestrator |
2026-01-05 00:18:18.945339 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-05 00:18:22.072706 | orchestrator | changed: [testbed-manager]
2026-01-05 00:18:22.072796 | orchestrator |
2026-01-05 00:18:22.072806 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-05 00:19:24.086076 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-05 00:19:24.086224 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-05 00:19:24.086285 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-01-05 00:19:24.086299 | orchestrator | changed: [testbed-manager]
2026-01-05 00:19:24.086312 | orchestrator |
2026-01-05 00:19:24.086324 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-05 00:19:35.552191 | orchestrator | changed: [testbed-manager]
2026-01-05 00:19:35.552342 | orchestrator |
2026-01-05 00:19:35.552356 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-05 00:19:35.654316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-05 00:19:35.654426 | orchestrator |
2026-01-05 00:19:35.654443 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-05 00:19:35.654455 | orchestrator |
2026-01-05 00:19:35.654472 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-05 00:19:35.714659 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:19:35.714755 | orchestrator |
2026-01-05 00:19:35.714769 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-05 00:19:35.813669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-05 00:19:35.813794 | orchestrator |
2026-01-05 00:19:35.813810 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-05 00:19:36.632959 | orchestrator | changed: [testbed-manager]
2026-01-05 00:19:36.633071 | orchestrator |
2026-01-05 00:19:36.633086 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-05 00:19:40.228147 | orchestrator | ok: [testbed-manager]
2026-01-05 00:19:40.228363 | orchestrator |
2026-01-05 00:19:40.228388 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-05 00:19:40.290322 | orchestrator | ok: [testbed-manager] => {
2026-01-05 00:19:40.290440 | orchestrator | "version_check_result.stdout_lines": [
2026-01-05 00:19:40.290466 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-05 00:19:40.290485 | orchestrator | "Checking running containers against expected versions...",
2026-01-05 00:19:40.290505 | orchestrator | "",
2026-01-05 00:19:40.290524 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-05 00:19:40.290541 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-05 00:19:40.290562 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290581 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-05 00:19:40.290599 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.290652 | orchestrator | "",
2026-01-05 00:19:40.290671 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-05 00:19:40.290683 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-05 00:19:40.290694 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290705 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-05 00:19:40.290715 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.290726 | orchestrator | "",
2026-01-05 00:19:40.290737 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-05 00:19:40.290748 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-05 00:19:40.290759 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290770 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-05 00:19:40.290781 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.290792 | orchestrator | "",
2026-01-05 00:19:40.290805 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-05 00:19:40.290818 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-05 00:19:40.290830 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290846 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-05 00:19:40.290858 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.290870 | orchestrator | "",
2026-01-05 00:19:40.290883 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-05 00:19:40.290897 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-05 00:19:40.290910 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290922 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-05 00:19:40.290935 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.290948 | orchestrator | "",
2026-01-05 00:19:40.290961 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-05 00:19:40.290973 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-05 00:19:40.290986 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.290998 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-05 00:19:40.291032 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.291056 | orchestrator | "",
2026-01-05 00:19:40.291070 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-05 00:19:40.291084 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-05 00:19:40.291096 | orchestrator | " Enabled: true",
2026-01-05 00:19:40.291108 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-05 00:19:40.291120 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:19:40.291133 | orchestrator | "",
2026-01-05 00:19:40.291146 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-01-05 00:19:40.291159 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:19:40.291169 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291180 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:19:40.291191 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291201 | orchestrator | "", 2026-01-05 00:19:40.291212 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-05 00:19:40.291222 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-05 00:19:40.291233 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291244 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-05 00:19:40.291298 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291313 | orchestrator | "", 2026-01-05 00:19:40.291324 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-05 00:19:40.291334 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:19:40.291345 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291356 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:19:40.291377 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291388 | orchestrator | "", 2026-01-05 00:19:40.291399 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-05 00:19:40.291410 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291421 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291432 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291442 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291453 | orchestrator | "", 2026-01-05 00:19:40.291464 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-05 00:19:40.291476 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291487 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291498 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291509 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291520 | orchestrator | "", 2026-01-05 00:19:40.291531 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-05 00:19:40.291542 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291553 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291563 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291574 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291585 | orchestrator | "", 2026-01-05 00:19:40.291596 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-05 00:19:40.291614 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291632 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291651 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291694 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291713 | orchestrator | "", 2026-01-05 00:19:40.291741 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-05 00:19:40.291758 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291774 | orchestrator | " Enabled: true", 2026-01-05 00:19:40.291791 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:19:40.291808 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:19:40.291824 | orchestrator | "", 2026-01-05 00:19:40.291840 | orchestrator | "=== Summary ===", 2026-01-05 00:19:40.291857 | orchestrator | "Errors (version mismatches): 0", 2026-01-05 00:19:40.291874 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-01-05 00:19:40.291892 | orchestrator | "", 2026-01-05 00:19:40.291910 | orchestrator | "✅ All running containers match expected versions!" 2026-01-05 00:19:40.291928 | orchestrator | ] 2026-01-05 00:19:40.291944 | orchestrator | } 2026-01-05 00:19:40.291963 | orchestrator | 2026-01-05 00:19:40.291982 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-05 00:19:40.351175 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:40.351318 | orchestrator | 2026-01-05 00:19:40.351343 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:19:40.351365 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-05 00:19:40.351384 | orchestrator | 2026-01-05 00:19:40.492603 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:19:40.492710 | orchestrator | + deactivate 2026-01-05 00:19:40.492725 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-05 00:19:40.492740 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:19:40.492751 | orchestrator | + export PATH 2026-01-05 00:19:40.492762 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-05 00:19:40.492773 | orchestrator | + '[' -n '' ']' 2026-01-05 00:19:40.492784 | orchestrator | + hash -r 2026-01-05 00:19:40.492795 | orchestrator | + '[' -n '' ']' 2026-01-05 00:19:40.492805 | orchestrator | + unset VIRTUAL_ENV 2026-01-05 00:19:40.492816 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-05 00:19:40.492827 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-05 00:19:40.492837 | orchestrator | + unset -f deactivate 2026-01-05 00:19:40.492849 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-05 00:19:40.500780 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 00:19:40.500819 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-05 00:19:40.500831 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.500843 | orchestrator | + local name=ceph-ansible 2026-01-05 00:19:40.500854 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.501570 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:19:40.530522 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.530616 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-05 00:19:40.530630 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.530642 | orchestrator | + local name=kolla-ansible 2026-01-05 00:19:40.530652 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.530958 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-05 00:19:40.564675 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.564758 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-05 00:19:40.564771 | orchestrator | + local max_attempts=60 2026-01-05 00:19:40.564782 | orchestrator | + local name=osism-ansible 2026-01-05 00:19:40.564793 | orchestrator | + local attempt_num=1 2026-01-05 00:19:40.565185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-05 00:19:40.599472 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:19:40.599577 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-05 00:19:40.599593 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-05 00:19:41.361808 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-05 00:19:41.579524 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-05 00:19:41.579605 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.579614 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.579621 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-05 00:19:41.579630 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-05 00:19:41.579650 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.579657 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.579663 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.579669 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.579676 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-05 00:19:41.579682 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-01-05 00:19:41.579688 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-05 00:19:41.579713 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.579720 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-05 00:19:41.579726 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-05 00:19:41.579732 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:19:41.587520 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-05 00:19:41.648541 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-05 00:19:41.648636 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-05 00:19:41.653926 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-05 00:19:54.039839 | orchestrator | 2026-01-05 00:19:54 | INFO  | Task 0335b31d-894e-4ea0-b8ef-d08ed669491a (resolvconf) was prepared for execution. 2026-01-05 00:19:54.039966 | orchestrator | 2026-01-05 00:19:54 | INFO  | It takes a moment until task 0335b31d-894e-4ea0-b8ef-d08ed669491a (resolvconf) has been started and output is visible here. 
2026-01-05 00:20:08.126490 | orchestrator | 2026-01-05 00:20:08.126595 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-05 00:20:08.126612 | orchestrator | 2026-01-05 00:20:08.126624 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:20:08.126636 | orchestrator | Monday 05 January 2026 00:19:58 +0000 (0:00:00.133) 0:00:00.133 ******** 2026-01-05 00:20:08.126647 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.126659 | orchestrator | 2026-01-05 00:20:08.126670 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-05 00:20:08.126682 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:03.801) 0:00:03.934 ******** 2026-01-05 00:20:08.126693 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.126704 | orchestrator | 2026-01-05 00:20:08.126715 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-05 00:20:08.126726 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.074) 0:00:04.008 ******** 2026-01-05 00:20:08.126737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-05 00:20:08.126749 | orchestrator | 2026-01-05 00:20:08.126760 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-05 00:20:08.126771 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.068) 0:00:04.077 ******** 2026-01-05 00:20:08.126800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:20:08.126811 | orchestrator | 2026-01-05 00:20:08.126822 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-05 00:20:08.126834 | orchestrator | Monday 05 January 2026 00:20:02 +0000 (0:00:00.073) 0:00:04.150 ******** 2026-01-05 00:20:08.126844 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.126855 | orchestrator | 2026-01-05 00:20:08.126866 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-05 00:20:08.126877 | orchestrator | Monday 05 January 2026 00:20:03 +0000 (0:00:01.074) 0:00:05.225 ******** 2026-01-05 00:20:08.126888 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.126899 | orchestrator | 2026-01-05 00:20:08.126910 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-05 00:20:08.126939 | orchestrator | Monday 05 January 2026 00:20:03 +0000 (0:00:00.040) 0:00:05.266 ******** 2026-01-05 00:20:08.126951 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.126961 | orchestrator | 2026-01-05 00:20:08.126972 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-05 00:20:08.126983 | orchestrator | Monday 05 January 2026 00:20:03 +0000 (0:00:00.469) 0:00:05.736 ******** 2026-01-05 00:20:08.126994 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:08.127005 | orchestrator | 2026-01-05 00:20:08.127016 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-05 00:20:08.127027 | orchestrator | Monday 05 January 2026 00:20:04 +0000 (0:00:00.085) 0:00:05.821 ******** 2026-01-05 00:20:08.127038 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:08.127049 | orchestrator | 2026-01-05 00:20:08.127060 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-05 00:20:08.127071 | orchestrator | Monday 05 January 2026 00:20:04 +0000 (0:00:00.465) 0:00:06.287 ******** 2026-01-05 00:20:08.127082 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:20:08.127093 | orchestrator | 2026-01-05 00:20:08.127104 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-05 00:20:08.127115 | orchestrator | Monday 05 January 2026 00:20:05 +0000 (0:00:01.009) 0:00:07.296 ******** 2026-01-05 00:20:08.127125 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:08.127136 | orchestrator | 2026-01-05 00:20:08.127147 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-05 00:20:08.127158 | orchestrator | Monday 05 January 2026 00:20:06 +0000 (0:00:01.058) 0:00:08.355 ******** 2026-01-05 00:20:08.127169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-05 00:20:08.127180 | orchestrator | 2026-01-05 00:20:08.127191 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-05 00:20:08.127201 | orchestrator | Monday 05 January 2026 00:20:06 +0000 (0:00:00.084) 0:00:08.440 ******** 2026-01-05 00:20:08.127212 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:08.127223 | orchestrator | 2026-01-05 00:20:08.127233 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:20:08.127245 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:20:08.127256 | orchestrator | 2026-01-05 00:20:08.127267 | orchestrator | 2026-01-05 00:20:08.127277 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:20:08.127288 | orchestrator | Monday 05 January 2026 00:20:07 +0000 (0:00:01.167) 0:00:09.608 ******** 2026-01-05 00:20:08.127369 | orchestrator | =============================================================================== 2026-01-05 00:20:08.127381 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.80s 2026-01-05 00:20:08.127392 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-01-05 00:20:08.127402 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2026-01-05 00:20:08.127413 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.06s 2026-01-05 00:20:08.127424 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s 2026-01-05 00:20:08.127435 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2026-01-05 00:20:08.127463 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.47s 2026-01-05 00:20:08.127475 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-01-05 00:20:08.127486 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-05 00:20:08.127497 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-05 00:20:08.127516 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-01-05 00:20:08.127527 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-01-05 00:20:08.127538 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2026-01-05 00:20:08.447266 | orchestrator | + osism apply sshconfig 2026-01-05 00:20:20.570278 | orchestrator | 2026-01-05 00:20:20 | INFO  | Task 28b81bc4-e3d4-42d5-bbf6-9681ae108821 (sshconfig) was prepared for execution. 
2026-01-05 00:20:20.570389 | orchestrator | 2026-01-05 00:20:20 | INFO  | It takes a moment until task 28b81bc4-e3d4-42d5-bbf6-9681ae108821 (sshconfig) has been started and output is visible here. 2026-01-05 00:20:33.206940 | orchestrator | 2026-01-05 00:20:33.207073 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-05 00:20:33.207091 | orchestrator | 2026-01-05 00:20:33.207127 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-05 00:20:33.207140 | orchestrator | Monday 05 January 2026 00:20:24 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-01-05 00:20:33.207151 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:33.207163 | orchestrator | 2026-01-05 00:20:33.207174 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-05 00:20:33.207185 | orchestrator | Monday 05 January 2026 00:20:25 +0000 (0:00:00.636) 0:00:00.803 ******** 2026-01-05 00:20:33.207196 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:33.207208 | orchestrator | 2026-01-05 00:20:33.207219 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-05 00:20:33.207230 | orchestrator | Monday 05 January 2026 00:20:26 +0000 (0:00:00.547) 0:00:01.351 ******** 2026-01-05 00:20:33.207241 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:20:33.207252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:20:33.207263 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:20:33.207273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:20:33.207284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:20:33.207294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:20:33.207305 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-05 00:20:33.207315 | orchestrator | 2026-01-05 00:20:33.207376 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-05 00:20:33.207388 | orchestrator | Monday 05 January 2026 00:20:32 +0000 (0:00:06.062) 0:00:07.414 ******** 2026-01-05 00:20:33.207399 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:33.207409 | orchestrator | 2026-01-05 00:20:33.207420 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-05 00:20:33.207431 | orchestrator | Monday 05 January 2026 00:20:32 +0000 (0:00:00.075) 0:00:07.489 ******** 2026-01-05 00:20:33.207442 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:33.207452 | orchestrator | 2026-01-05 00:20:33.207463 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:20:33.207477 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:20:33.207490 | orchestrator | 2026-01-05 00:20:33.207503 | orchestrator | 2026-01-05 00:20:33.207515 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:20:33.207528 | orchestrator | Monday 05 January 2026 00:20:32 +0000 (0:00:00.597) 0:00:08.086 ******** 2026-01-05 00:20:33.207540 | orchestrator | =============================================================================== 2026-01-05 00:20:33.207553 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.06s 2026-01-05 00:20:33.207566 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.64s 2026-01-05 00:20:33.207578 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-01-05 00:20:33.207621 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.55s 2026-01-05 00:20:33.207634 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-05 00:20:33.550315 | orchestrator | + osism apply known-hosts 2026-01-05 00:20:45.657933 | orchestrator | 2026-01-05 00:20:45 | INFO  | Task eae8f628-72a6-4bd3-8269-f6e46cc67ca8 (known-hosts) was prepared for execution. 2026-01-05 00:20:45.658112 | orchestrator | 2026-01-05 00:20:45 | INFO  | It takes a moment until task eae8f628-72a6-4bd3-8269-f6e46cc67ca8 (known-hosts) has been started and output is visible here. 2026-01-05 00:21:03.272750 | orchestrator | 2026-01-05 00:21:03.272868 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-05 00:21:03.272883 | orchestrator | 2026-01-05 00:21:03.272894 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-05 00:21:03.272905 | orchestrator | Monday 05 January 2026 00:20:49 +0000 (0:00:00.182) 0:00:00.182 ******** 2026-01-05 00:21:03.272916 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:21:03.272927 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:21:03.272937 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:21:03.272953 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:21:03.272971 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:21:03.272988 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:21:03.273005 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:21:03.273023 | orchestrator | 2026-01-05 00:21:03.273041 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-05 00:21:03.273062 | orchestrator | Monday 05 January 2026 00:20:56 +0000 (0:00:06.199) 0:00:06.381 ******** 2026-01-05 
00:21:03.273080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:21:03.273101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:21:03.273121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:21:03.273138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:21:03.273157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:21:03.273190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:21:03.273211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:21:03.273229 | orchestrator | 2026-01-05 00:21:03.273245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273263 | orchestrator | Monday 05 January 2026 00:20:56 +0000 (0:00:00.168) 0:00:06.549 ******** 2026-01-05 00:21:03.273290 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkeOTyabUREaa5uSaHQyttQVvKi/F0Pu+CXcsqhLJEJlSNeIo/6lsbuS2z5HCDYKR61ldSEG2lkVWgZGWXbULxXTjlI0aHvm8JaMpQBq6utbO0sxpOGuBNuAbIveYjbvKGsrnV0pFTf1a0Dq1Rp5kjE3HbD3WZTRYXEMkZyvsvhTYtaLcPHLJEzjZor+hAZMil0uIwfpDI3t9bUCaK4LOQUuFmWj5hbQ+zhsfL7MmyO5be32vJSzUSIyfWsIvr0sDvTPvQr3hWvpdigq2UioQOw855XRtDxD7cu/3fgTyx14B+llGYwJ+qptZViF1E3bhTddBMssWAYXfv5BSI9HLALWtXpsgrI/K20X8CqqaC978dKdoeD/wDC5vFCRgRzWywOPmy8kdK6Wb1ZY1SR3Y0znwxy0+izEDpYkEpjvGRmZu4udwoVltIKUzCFgylNSnob/UyWf62OMMiyyOQ/8h0tpqUOGkFsydiYbVk4aI4AuB7FAPXI/quGZ5NB6EZZRU=) 2026-01-05 00:21:03.273347 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvVDpBkB6AKU0bFM2Eyqid0+3ikaMXq90KUViE37UXDc5fyQ3aAjVJ9QMJojwispr5IEyBH3irlSnXNcieNeME=) 2026-01-05 00:21:03.273409 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIONNxqu/2RyFcjrNt4fLSE8Jon/cDnYP0NzA/C+qVUjj) 2026-01-05 00:21:03.273430 | orchestrator | 2026-01-05 00:21:03.273447 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273464 | orchestrator | Monday 05 January 2026 00:20:57 +0000 (0:00:01.257) 0:00:07.807 ******** 2026-01-05 00:21:03.273482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA/viRAPTWH8JxkhttpEJBU/ubwSNonODAYlJyRlvAzv) 2026-01-05 00:21:03.273540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPhAUeJi6ZVSZuRK6nyEfD268ZnjFKenkA57h04gptxTAZidb5apLUQWZxkKxloN0AL/obk4pu7RL/sCZAaCCdcuFGi4WzlrOpwAdBmCSjf/Lfj7hMCX1nsFSSrIXdaC22FIK127GixhI/rl7fBhW3Mi5XNhTcDwrZ3nGUb+4fOmHvJNhazW97RHhE+/Tv+zG09wukHKy+6oAj0mGsM/dS3a9oe/C1/kgyOtcYZq05cvhOlBidE5aiFb8ItapxT0C1uIm6bQ8fEVSbvCjLE+weC7pvKHlVQIdR/rby8mkSeRfa0yArgD55iJHmlR5ePyT/VjDmMy3rYqyaOI8PJgy+AgJqg6EnP8Kf10qO8h8JG3T1OFcYC26vXYm80J132RBgBSvqa8ES+njjHRvH5F7mVB2zs0x4t3acw+u2xCJb9JCoW8FJxnGjlnAvZZEUh5ESthcaMxTWHRaRtM675dwxFocqrzZzF3vQ6t6x8MwNIot9xkAOPZgAxCUl0TUC7uM=) 2026-01-05 00:21:03.273555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0e7Ky/b7JUMgAIFIC4mPKd3jax12d5tPb2FhTQ07Y9omQ4L6GQzJFm4QLMj/Z117TlzpoE3cmB5BcWMiXckYk=) 2026-01-05 00:21:03.273567 | orchestrator | 2026-01-05 00:21:03.273579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273591 | orchestrator | Monday 05 January 2026 00:20:58 +0000 (0:00:01.167) 0:00:08.974 ******** 2026-01-05 00:21:03.273603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDL3UFImxmVu5M/eRBupyZ28IVIVTdJORo+TB9U9XXZqq4I5QloDWdwelU492ckgdwVged0xrtLfYWdRJmPbMUFZpq2TwmeuriHngxOUXnZR9YxsPTtr3r/XAgKY3qHMbMrlXSLtEFoxPqFhYNePz7La+oaXHlgkB3+gcCR/IeQNS3ymKmMoD3SQhPrntYkJ4+UK1gtUZeH3/QPEVA8YTlcQpkhcXLE16OyjPYJGeNupYjJH0Zk1VRXj5m1gU1yVYwZ3qb76SErGpxebdrRtdyM3OdkAOZP3CPkgF1Bx8FiCtfVg8JbuAefXV/COG8dz/zWWaAcx2sukC1jXknsWQocYD0XRzfmyTxRhNYWPe/jE/M4S1qvw1ih+ZikuCuM/y8zH4HasJMi0nSAwsMItyFkKHmc8EWv1E699BGxUm7fUah/K/KA3t44IfPtSkabmW6ur5d+++d5M9Eem7MAFAgg7Ogp+Pt/AskZR7TOxHCgmf88fZmkqqQV8PvVF8Xn42M=) 2026-01-05 00:21:03.273616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJyyyUIUjzNoXlvxXIsWKE3GwmF3PF6+A9847htXlW3HPAbJW/F4D2XHkGiRJH+7JW7xMEc2tlCLrQIeaCbjZI=) 
2026-01-05 00:21:03.273626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBROwB81xQ3N0BeeVNK0X6OwK5pQdaxFGmjHjQZv/kQk) 2026-01-05 00:21:03.273636 | orchestrator | 2026-01-05 00:21:03.273645 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273655 | orchestrator | Monday 05 January 2026 00:20:59 +0000 (0:00:01.141) 0:00:10.116 ******** 2026-01-05 00:21:03.273665 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0LNO/jPQL2uD9QR1Ww4wyXs1n1Gt9OebHcvQjqgqKX4vPmnG/tv24BaINTh0dLHb+CPagkAU/TlWeoshrqhuLzeW4u9ic4FWh0mkQUEQSVwjnZh0lyFRiOfxI496bu5o3JG1UJTe/NkRZUfQRQLDAN4u9ZLe7vyj+QCBImsGE2zhuI+vNtFBrYTrmV/r5iT9sysf44Q5vXve3tUKWqK2VTEDoRZBl0z15C+zg1+eAXwuGNq58EezrblbimjGqbLcaXNz1cDOTV8pUONnfvljVmNWXuL5hvHA/x075B0KQd0bS6SA8ekZfo8yI4QeTm41tEjnfrHIhPTunEHyicx032gU7vxE7ki+wh2Aa8URN8J/zO9frNBhSYMmkEdVNoJ+MkRJHwZKA/fdYoQRInAtV5/p2nfVxNODxVoP7EctQt8jVn/qkIWSAohTcsbW1uaff0Pa9Fkkji8HjTHx2QfVAFoo6CDc9BV4yPuGqlrInkBYf2VB2OQlm4jpUv6RZtrs=) 2026-01-05 00:21:03.273685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCnf36KNOW16lCUv2pVh44hEr6Ar0PB11CW6ONMrEmGU9vyNM7GhkK3NQ5MbCS2o754FlUi3+RMbQpI5WdHfU3U=) 2026-01-05 00:21:03.273695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdQb61nfGL4J8Z/+pvne1mXbBLQyJFk+pBeD1a9MdWo) 2026-01-05 00:21:03.273705 | orchestrator | 2026-01-05 00:21:03.273714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273724 | orchestrator | Monday 05 January 2026 00:21:00 +0000 (0:00:01.155) 0:00:11.272 ******** 2026-01-05 00:21:03.273809 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD0V1bUKT7l//BwfMa1gE1TRQG7GmdFXF57SYwnCfAAfGxzT/jTf9WOVUw52/lLdI9TtrV4EVii7exEv+VtwBC3F2e79DAyUFRGK5Cm5RcxrmDOV+Q/VMjS2D8Xg7+TDLpdfzpHURDbA7Vi6H7lAvJkzd/7N329frkC9La0FwIPE8KR8nbm4jHloc4QeFUuiI0Or+aWiUb43lD96bnYE70H/Tu8zdsDQmvvGC9N9f9JKkn+nkHYTK4zXoTB2547QV8kNFzvpBThNaSMWucpGWfy4mxUvAUAhRS3HDLcgYv2R40KM9dUJaxpUYvS/9jHBxgThLitSv0nP6lOLLMqJ7C3UKLOIFE9tNHS5LGwiVE58FfDIApqrvYWl1g9CUIxRtF6dyXeqdnlLFtVpa3SWhJxdgvxj48/LBe54v2uVsNgpfHY+g79ZlDz3IFwlWOYrpNhaj8owKxKB9z11sNI6zY09U9xaNKqE3woQF9dt8uFObCVIYwof2/BtucgmIkV6OE=) 2026-01-05 00:21:03.273820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQu5sy/cCCXns03Ba3sPkYJBFw0rAJRYJFJw11Fj72vExW1dF8DXst/Kj6NYVK60TgZWf81clKSjrWNGAc1QW8=) 2026-01-05 00:21:03.273830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWrmqdcE3pVPhOj2aQFIcJ96DiQqGTRSWhFe4p/kku8) 2026-01-05 00:21:03.273840 | orchestrator | 2026-01-05 00:21:03.273850 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:03.273859 | orchestrator | Monday 05 January 2026 00:21:02 +0000 (0:00:01.137) 0:00:12.409 ******** 2026-01-05 00:21:03.273878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSsOh6k0HWPWrCWxdNMyWwls0v8fG6BjRFFW8UK+Zj9LlWmrxNr19JaEuz2X0tdeTmdvge9AYrUGYRtLSRdPeA3CijaTHauKJyXXuR+XNjFW/gp1wSQ+7MJ6/4+ClcqZFZT9dgFognWZ8vPFXZoZz0BH3nFUAcsWHoONkG6pZf42LznmUCgdEn6FNQ7rEvNk+V6LuxtFp8fvGy8MsD29Ufk4yte1PpaC9nkfQ5NF7iPWPKsDatO2Hg2tn1w49hldghConGxNmt5+dFMPr6KYlnC5yrJSbQ6UusU8jenuruxBwP31P0W+s3xX2M/sTxU1FaiJ1jVNXDL6oRDNzlI3Mer5AP+I2nx0oGnQZ9vRjXX4vzxzt/b33Q+Hk74Xng2pxfG+90tGCTlazoo0VBjiNNf5Ex9OGm2lT3ADD/t0qgpGm2cV19jsZHkHPqIp3IDh3e1YwQRqY49/WE/jMifIejOmhwn3vn3qd0RbNY2nEFBmHLpj6mLaOxHDr2niQNvo0=) 2026-01-05 00:21:14.600962 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqSgzBKVVeWmIfJbSTn8OhU+/PewuWCid5lKh5peLsndXVloz4Qlku6PUvuEyQ+guJ18FwDmxCvVAJHLcYYioY=) 2026-01-05 00:21:14.601095 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0znfGBdGD5rc24H53xxI++KrK/YbuypBte1xDNQzS3) 2026-01-05 00:21:14.601112 | orchestrator | 2026-01-05 00:21:14.601126 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:14.601140 | orchestrator | Monday 05 January 2026 00:21:03 +0000 (0:00:01.146) 0:00:13.555 ******** 2026-01-05 00:21:14.601151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDHcpXWKxQ8pNmdzb5fjCTy3cBRQ/x+ecgMF01ulIGIYU8a930muamERZPHPO+9tIA2MjfQlvqxScKbd+rn9qCU=) 2026-01-05 00:21:14.601165 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYGEv+9WjOQtXpKTr95PS2dpUh/VNGUAxSLPJuzSQILW1CDyVhV+GQIGXRdvV08HTYH55hwTX15gDCmI4U9JYRiXsHwuS8njEvsbiLSKdMchVXiotNr0W4lOu8nO++alnZC7S4UywQhyJ+d7N8R4sosY6kDx8F4r8+GdlNFaNp4mSObtmg04MzkeQ2/TJPjDCnDmHI5ZP+THQNpDOTrC8+rnR9h5pqact117/zlxpOn3oQpevBknnx9kbzyTjWEEUGdzt7hv50ykaV9a5HlP2vliJsAJPvCQdTozrCz9BDOANFr9KBOnoCQdSR319zkGPTwv50xl6JIwOT0roOkDlGBC/32xzNNF6XbN3gf6Ozo0gUoCdv5D9NF6QOVO4NKmgzyRlW4AhnOmvEqT2j+aF0AJ/ZdsVlmXc+RuRDQYMzTSW6M3GBtNxqG1HFcnayXdMD7qYdFLcTRmE7v63PEmX1ZKSPJ4LRkPBUBIqzP1N0uw3FnyPjuWQ3DjZpNUKzm6k=) 2026-01-05 00:21:14.601208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHEhGE38DgggBWhCE4aME/qOwvFuadqg1MOvHuFxy/R) 2026-01-05 00:21:14.601220 | orchestrator | 2026-01-05 00:21:14.601232 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-05 00:21:14.601244 | orchestrator | Monday 05 January 2026 00:21:04 +0000 (0:00:01.165) 
0:00:14.720 ******** 2026-01-05 00:21:14.601256 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:21:14.601268 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:21:14.601279 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:21:14.601290 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:21:14.601301 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:21:14.601312 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:21:14.601323 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:21:14.601334 | orchestrator | 2026-01-05 00:21:14.601345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-05 00:21:14.601358 | orchestrator | Monday 05 January 2026 00:21:09 +0000 (0:00:05.528) 0:00:20.249 ******** 2026-01-05 00:21:14.601443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:21:14.601458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:21:14.601470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:21:14.601481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:21:14.601491 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:21:14.601502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:21:14.601512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:21:14.601523 | orchestrator | 2026-01-05 00:21:14.601534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:14.601544 | orchestrator | Monday 05 January 2026 00:21:10 +0000 (0:00:00.209) 0:00:20.458 ******** 2026-01-05 00:21:14.601555 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIONNxqu/2RyFcjrNt4fLSE8Jon/cDnYP0NzA/C+qVUjj) 2026-01-05 00:21:14.601604 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkeOTyabUREaa5uSaHQyttQVvKi/F0Pu+CXcsqhLJEJlSNeIo/6lsbuS2z5HCDYKR61ldSEG2lkVWgZGWXbULxXTjlI0aHvm8JaMpQBq6utbO0sxpOGuBNuAbIveYjbvKGsrnV0pFTf1a0Dq1Rp5kjE3HbD3WZTRYXEMkZyvsvhTYtaLcPHLJEzjZor+hAZMil0uIwfpDI3t9bUCaK4LOQUuFmWj5hbQ+zhsfL7MmyO5be32vJSzUSIyfWsIvr0sDvTPvQr3hWvpdigq2UioQOw855XRtDxD7cu/3fgTyx14B+llGYwJ+qptZViF1E3bhTddBMssWAYXfv5BSI9HLALWtXpsgrI/K20X8CqqaC978dKdoeD/wDC5vFCRgRzWywOPmy8kdK6Wb1ZY1SR3Y0znwxy0+izEDpYkEpjvGRmZu4udwoVltIKUzCFgylNSnob/UyWf62OMMiyyOQ/8h0tpqUOGkFsydiYbVk4aI4AuB7FAPXI/quGZ5NB6EZZRU=) 2026-01-05 00:21:14.601637 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvVDpBkB6AKU0bFM2Eyqid0+3ikaMXq90KUViE37UXDc5fyQ3aAjVJ9QMJojwispr5IEyBH3irlSnXNcieNeME=) 2026-01-05 
00:21:14.601649 | orchestrator | 2026-01-05 00:21:14.601660 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:14.601676 | orchestrator | Monday 05 January 2026 00:21:11 +0000 (0:00:01.170) 0:00:21.629 ******** 2026-01-05 00:21:14.601688 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0e7Ky/b7JUMgAIFIC4mPKd3jax12d5tPb2FhTQ07Y9omQ4L6GQzJFm4QLMj/Z117TlzpoE3cmB5BcWMiXckYk=) 2026-01-05 00:21:14.601699 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPhAUeJi6ZVSZuRK6nyEfD268ZnjFKenkA57h04gptxTAZidb5apLUQWZxkKxloN0AL/obk4pu7RL/sCZAaCCdcuFGi4WzlrOpwAdBmCSjf/Lfj7hMCX1nsFSSrIXdaC22FIK127GixhI/rl7fBhW3Mi5XNhTcDwrZ3nGUb+4fOmHvJNhazW97RHhE+/Tv+zG09wukHKy+6oAj0mGsM/dS3a9oe/C1/kgyOtcYZq05cvhOlBidE5aiFb8ItapxT0C1uIm6bQ8fEVSbvCjLE+weC7pvKHlVQIdR/rby8mkSeRfa0yArgD55iJHmlR5ePyT/VjDmMy3rYqyaOI8PJgy+AgJqg6EnP8Kf10qO8h8JG3T1OFcYC26vXYm80J132RBgBSvqa8ES+njjHRvH5F7mVB2zs0x4t3acw+u2xCJb9JCoW8FJxnGjlnAvZZEUh5ESthcaMxTWHRaRtM675dwxFocqrzZzF3vQ6t6x8MwNIot9xkAOPZgAxCUl0TUC7uM=) 2026-01-05 00:21:14.601711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA/viRAPTWH8JxkhttpEJBU/ubwSNonODAYlJyRlvAzv) 2026-01-05 00:21:14.601721 | orchestrator | 2026-01-05 00:21:14.601732 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:14.601743 | orchestrator | Monday 05 January 2026 00:21:12 +0000 (0:00:01.106) 0:00:22.736 ******** 2026-01-05 00:21:14.601753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBROwB81xQ3N0BeeVNK0X6OwK5pQdaxFGmjHjQZv/kQk) 2026-01-05 00:21:14.601765 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDL3UFImxmVu5M/eRBupyZ28IVIVTdJORo+TB9U9XXZqq4I5QloDWdwelU492ckgdwVged0xrtLfYWdRJmPbMUFZpq2TwmeuriHngxOUXnZR9YxsPTtr3r/XAgKY3qHMbMrlXSLtEFoxPqFhYNePz7La+oaXHlgkB3+gcCR/IeQNS3ymKmMoD3SQhPrntYkJ4+UK1gtUZeH3/QPEVA8YTlcQpkhcXLE16OyjPYJGeNupYjJH0Zk1VRXj5m1gU1yVYwZ3qb76SErGpxebdrRtdyM3OdkAOZP3CPkgF1Bx8FiCtfVg8JbuAefXV/COG8dz/zWWaAcx2sukC1jXknsWQocYD0XRzfmyTxRhNYWPe/jE/M4S1qvw1ih+ZikuCuM/y8zH4HasJMi0nSAwsMItyFkKHmc8EWv1E699BGxUm7fUah/K/KA3t44IfPtSkabmW6ur5d+++d5M9Eem7MAFAgg7Ogp+Pt/AskZR7TOxHCgmf88fZmkqqQV8PvVF8Xn42M=) 2026-01-05 00:21:14.601776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJyyyUIUjzNoXlvxXIsWKE3GwmF3PF6+A9847htXlW3HPAbJW/F4D2XHkGiRJH+7JW7xMEc2tlCLrQIeaCbjZI=) 2026-01-05 00:21:14.601787 | orchestrator | 2026-01-05 00:21:14.601798 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:14.601809 | orchestrator | Monday 05 January 2026 00:21:13 +0000 (0:00:01.112) 0:00:23.848 ******** 2026-01-05 00:21:14.601820 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0LNO/jPQL2uD9QR1Ww4wyXs1n1Gt9OebHcvQjqgqKX4vPmnG/tv24BaINTh0dLHb+CPagkAU/TlWeoshrqhuLzeW4u9ic4FWh0mkQUEQSVwjnZh0lyFRiOfxI496bu5o3JG1UJTe/NkRZUfQRQLDAN4u9ZLe7vyj+QCBImsGE2zhuI+vNtFBrYTrmV/r5iT9sysf44Q5vXve3tUKWqK2VTEDoRZBl0z15C+zg1+eAXwuGNq58EezrblbimjGqbLcaXNz1cDOTV8pUONnfvljVmNWXuL5hvHA/x075B0KQd0bS6SA8ekZfo8yI4QeTm41tEjnfrHIhPTunEHyicx032gU7vxE7ki+wh2Aa8URN8J/zO9frNBhSYMmkEdVNoJ+MkRJHwZKA/fdYoQRInAtV5/p2nfVxNODxVoP7EctQt8jVn/qkIWSAohTcsbW1uaff0Pa9Fkkji8HjTHx2QfVAFoo6CDc9BV4yPuGqlrInkBYf2VB2OQlm4jpUv6RZtrs=) 2026-01-05 00:21:14.601831 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdQb61nfGL4J8Z/+pvne1mXbBLQyJFk+pBeD1a9MdWo) 2026-01-05 00:21:14.601859 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCnf36KNOW16lCUv2pVh44hEr6Ar0PB11CW6ONMrEmGU9vyNM7GhkK3NQ5MbCS2o754FlUi3+RMbQpI5WdHfU3U=) 2026-01-05 00:21:18.674744 | orchestrator | 2026-01-05 00:21:18.674883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:18.674902 | orchestrator | Monday 05 January 2026 00:21:14 +0000 (0:00:01.037) 0:00:24.886 ******** 2026-01-05 00:21:18.674917 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD0V1bUKT7l//BwfMa1gE1TRQG7GmdFXF57SYwnCfAAfGxzT/jTf9WOVUw52/lLdI9TtrV4EVii7exEv+VtwBC3F2e79DAyUFRGK5Cm5RcxrmDOV+Q/VMjS2D8Xg7+TDLpdfzpHURDbA7Vi6H7lAvJkzd/7N329frkC9La0FwIPE8KR8nbm4jHloc4QeFUuiI0Or+aWiUb43lD96bnYE70H/Tu8zdsDQmvvGC9N9f9JKkn+nkHYTK4zXoTB2547QV8kNFzvpBThNaSMWucpGWfy4mxUvAUAhRS3HDLcgYv2R40KM9dUJaxpUYvS/9jHBxgThLitSv0nP6lOLLMqJ7C3UKLOIFE9tNHS5LGwiVE58FfDIApqrvYWl1g9CUIxRtF6dyXeqdnlLFtVpa3SWhJxdgvxj48/LBe54v2uVsNgpfHY+g79ZlDz3IFwlWOYrpNhaj8owKxKB9z11sNI6zY09U9xaNKqE3woQF9dt8uFObCVIYwof2/BtucgmIkV6OE=) 2026-01-05 00:21:18.674932 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQu5sy/cCCXns03Ba3sPkYJBFw0rAJRYJFJw11Fj72vExW1dF8DXst/Kj6NYVK60TgZWf81clKSjrWNGAc1QW8=) 2026-01-05 00:21:18.674946 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWrmqdcE3pVPhOj2aQFIcJ96DiQqGTRSWhFe4p/kku8) 2026-01-05 00:21:18.674957 | orchestrator | 2026-01-05 00:21:18.674967 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:18.674977 | orchestrator | Monday 05 January 2026 00:21:15 +0000 (0:00:01.052) 0:00:25.938 ******** 2026-01-05 00:21:18.674987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqSgzBKVVeWmIfJbSTn8OhU+/PewuWCid5lKh5peLsndXVloz4Qlku6PUvuEyQ+guJ18FwDmxCvVAJHLcYYioY=) 2026-01-05 00:21:18.674997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0znfGBdGD5rc24H53xxI++KrK/YbuypBte1xDNQzS3) 2026-01-05 00:21:18.675007 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSsOh6k0HWPWrCWxdNMyWwls0v8fG6BjRFFW8UK+Zj9LlWmrxNr19JaEuz2X0tdeTmdvge9AYrUGYRtLSRdPeA3CijaTHauKJyXXuR+XNjFW/gp1wSQ+7MJ6/4+ClcqZFZT9dgFognWZ8vPFXZoZz0BH3nFUAcsWHoONkG6pZf42LznmUCgdEn6FNQ7rEvNk+V6LuxtFp8fvGy8MsD29Ufk4yte1PpaC9nkfQ5NF7iPWPKsDatO2Hg2tn1w49hldghConGxNmt5+dFMPr6KYlnC5yrJSbQ6UusU8jenuruxBwP31P0W+s3xX2M/sTxU1FaiJ1jVNXDL6oRDNzlI3Mer5AP+I2nx0oGnQZ9vRjXX4vzxzt/b33Q+Hk74Xng2pxfG+90tGCTlazoo0VBjiNNf5Ex9OGm2lT3ADD/t0qgpGm2cV19jsZHkHPqIp3IDh3e1YwQRqY49/WE/jMifIejOmhwn3vn3qd0RbNY2nEFBmHLpj6mLaOxHDr2niQNvo0=) 2026-01-05 00:21:18.675018 | orchestrator | 2026-01-05 00:21:18.675029 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:21:18.675038 | orchestrator | Monday 05 January 2026 00:21:16 +0000 (0:00:01.004) 0:00:26.943 ******** 2026-01-05 00:21:18.675048 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDHcpXWKxQ8pNmdzb5fjCTy3cBRQ/x+ecgMF01ulIGIYU8a930muamERZPHPO+9tIA2MjfQlvqxScKbd+rn9qCU=) 2026-01-05 00:21:18.675076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCYGEv+9WjOQtXpKTr95PS2dpUh/VNGUAxSLPJuzSQILW1CDyVhV+GQIGXRdvV08HTYH55hwTX15gDCmI4U9JYRiXsHwuS8njEvsbiLSKdMchVXiotNr0W4lOu8nO++alnZC7S4UywQhyJ+d7N8R4sosY6kDx8F4r8+GdlNFaNp4mSObtmg04MzkeQ2/TJPjDCnDmHI5ZP+THQNpDOTrC8+rnR9h5pqact117/zlxpOn3oQpevBknnx9kbzyTjWEEUGdzt7hv50ykaV9a5HlP2vliJsAJPvCQdTozrCz9BDOANFr9KBOnoCQdSR319zkGPTwv50xl6JIwOT0roOkDlGBC/32xzNNF6XbN3gf6Ozo0gUoCdv5D9NF6QOVO4NKmgzyRlW4AhnOmvEqT2j+aF0AJ/ZdsVlmXc+RuRDQYMzTSW6M3GBtNxqG1HFcnayXdMD7qYdFLcTRmE7v63PEmX1ZKSPJ4LRkPBUBIqzP1N0uw3FnyPjuWQ3DjZpNUKzm6k=) 2026-01-05 00:21:18.675088 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHEhGE38DgggBWhCE4aME/qOwvFuadqg1MOvHuFxy/R) 2026-01-05 00:21:18.675124 | orchestrator | 2026-01-05 00:21:18.675134 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-05 00:21:18.675144 | orchestrator | Monday 05 January 2026 00:21:17 +0000 (0:00:00.958) 0:00:27.901 ******** 2026-01-05 00:21:18.675154 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-05 00:21:18.675164 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-05 00:21:18.675174 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-05 00:21:18.675183 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-05 00:21:18.675193 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 00:21:18.675202 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-05 00:21:18.675212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-05 00:21:18.675222 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:18.675232 | orchestrator | 2026-01-05 00:21:18.675258 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-05 00:21:18.675270 | orchestrator | Monday 05 January 
2026 00:21:17 +0000 (0:00:00.154) 0:00:28.056 ******** 2026-01-05 00:21:18.675282 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:18.675292 | orchestrator | 2026-01-05 00:21:18.675304 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-05 00:21:18.675315 | orchestrator | Monday 05 January 2026 00:21:17 +0000 (0:00:00.049) 0:00:28.105 ******** 2026-01-05 00:21:18.675327 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:18.675338 | orchestrator | 2026-01-05 00:21:18.675349 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-05 00:21:18.675359 | orchestrator | Monday 05 January 2026 00:21:17 +0000 (0:00:00.045) 0:00:28.151 ******** 2026-01-05 00:21:18.675397 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:18.675416 | orchestrator | 2026-01-05 00:21:18.675433 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:21:18.675457 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:21:18.675476 | orchestrator | 2026-01-05 00:21:18.675494 | orchestrator | 2026-01-05 00:21:18.675512 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:21:18.675529 | orchestrator | Monday 05 January 2026 00:21:18 +0000 (0:00:00.654) 0:00:28.805 ******** 2026-01-05 00:21:18.675545 | orchestrator | =============================================================================== 2026-01-05 00:21:18.675560 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.20s 2026-01-05 00:21:18.675571 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.53s 2026-01-05 00:21:18.675583 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-01-05 
00:21:18.675594 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-05 00:21:18.675606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-05 00:21:18.675616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-05 00:21:18.675626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-05 00:21:18.675635 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-05 00:21:18.675645 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:21:18.675654 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:21:18.675664 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-05 00:21:18.675673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-05 00:21:18.675693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-05 00:21:18.675702 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-05 00:21:18.675712 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-05 00:21:18.675722 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-01-05 00:21:18.675732 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s 2026-01-05 00:21:18.675742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s 2026-01-05 00:21:18.675752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with 
hostname --- 0.17s 2026-01-05 00:21:18.675761 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-01-05 00:21:18.901489 | orchestrator | + osism apply squid 2026-01-05 00:21:31.016890 | orchestrator | 2026-01-05 00:21:31 | INFO  | Task 1e23b7c3-8d70-487e-a917-b4acee0b8a2b (squid) was prepared for execution. 2026-01-05 00:21:31.017014 | orchestrator | 2026-01-05 00:21:31 | INFO  | It takes a moment until task 1e23b7c3-8d70-487e-a917-b4acee0b8a2b (squid) has been started and output is visible here. 2026-01-05 00:23:25.005862 | orchestrator | 2026-01-05 00:23:25.005996 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-05 00:23:25.006071 | orchestrator | 2026-01-05 00:23:25.006089 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-05 00:23:25.006101 | orchestrator | Monday 05 January 2026 00:21:34 +0000 (0:00:00.151) 0:00:00.151 ******** 2026-01-05 00:23:25.006113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:23:25.006125 | orchestrator | 2026-01-05 00:23:25.006136 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-05 00:23:25.006148 | orchestrator | Monday 05 January 2026 00:21:34 +0000 (0:00:00.076) 0:00:00.228 ******** 2026-01-05 00:23:25.006159 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:25.006171 | orchestrator | 2026-01-05 00:23:25.006182 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-05 00:23:25.006193 | orchestrator | Monday 05 January 2026 00:21:36 +0000 (0:00:01.324) 0:00:01.552 ******** 2026-01-05 00:23:25.006205 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-05 00:23:25.006216 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-05 00:23:25.006227 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-05 00:23:25.006238 | orchestrator | 2026-01-05 00:23:25.006249 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-05 00:23:25.006260 | orchestrator | Monday 05 January 2026 00:21:37 +0000 (0:00:01.200) 0:00:02.752 ******** 2026-01-05 00:23:25.006271 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-05 00:23:25.006282 | orchestrator | 2026-01-05 00:23:25.006293 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-05 00:23:25.006304 | orchestrator | Monday 05 January 2026 00:21:38 +0000 (0:00:01.083) 0:00:03.836 ******** 2026-01-05 00:23:25.006314 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:25.006326 | orchestrator | 2026-01-05 00:23:25.006337 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-05 00:23:25.006348 | orchestrator | Monday 05 January 2026 00:21:38 +0000 (0:00:00.375) 0:00:04.211 ******** 2026-01-05 00:23:25.006359 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:25.006370 | orchestrator | 2026-01-05 00:23:25.006388 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-05 00:23:25.006401 | orchestrator | Monday 05 January 2026 00:21:39 +0000 (0:00:00.950) 0:00:05.162 ******** 2026-01-05 00:23:25.006415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-05 00:23:25.006460 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:25.006474 | orchestrator | 2026-01-05 00:23:25.006488 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-05 00:23:25.006501 | orchestrator | Monday 05 January 2026 00:22:11 +0000 (0:00:31.978) 0:00:37.140 ******** 2026-01-05 00:23:25.006533 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:25.006546 | orchestrator | 2026-01-05 00:23:25.006559 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-05 00:23:25.006572 | orchestrator | Monday 05 January 2026 00:22:23 +0000 (0:00:12.043) 0:00:49.184 ******** 2026-01-05 00:23:25.006585 | orchestrator | Pausing for 60 seconds 2026-01-05 00:23:25.006598 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:25.006612 | orchestrator | 2026-01-05 00:23:25.006625 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-05 00:23:25.006636 | orchestrator | Monday 05 January 2026 00:23:24 +0000 (0:01:00.075) 0:01:49.260 ******** 2026-01-05 00:23:25.006646 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:25.006657 | orchestrator | 2026-01-05 00:23:25.006668 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-05 00:23:25.006679 | orchestrator | Monday 05 January 2026 00:23:24 +0000 (0:00:00.079) 0:01:49.340 ******** 2026-01-05 00:23:25.006689 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:25.006700 | orchestrator | 2026-01-05 00:23:25.006718 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:23:25.006739 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:23:25.006757 | orchestrator | 2026-01-05 00:23:25.006775 | orchestrator | 2026-01-05 00:23:25.006795 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-05 00:23:25.006816 | orchestrator | Monday 05 January 2026 00:23:24 +0000 (0:00:00.634) 0:01:49.974 ******** 2026-01-05 00:23:25.006836 | orchestrator | =============================================================================== 2026-01-05 00:23:25.006856 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-05 00:23:25.006868 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.98s 2026-01-05 00:23:25.006879 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.04s 2026-01-05 00:23:25.006910 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.32s 2026-01-05 00:23:25.006921 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s 2026-01-05 00:23:25.006932 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2026-01-05 00:23:25.006943 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-01-05 00:23:25.006954 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-01-05 00:23:25.006965 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-05 00:23:25.006975 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-01-05 00:23:25.006986 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-05 00:23:25.321092 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-05 00:23:25.321245 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-05 00:23:25.370137 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:23:25.370249 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-01-05 00:23:25.375343 | orchestrator | + set -e 2026-01-05 00:23:25.375407 | orchestrator | + NAMESPACE=kolla/release 2026-01-05 00:23:25.375443 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-05 00:23:25.381839 | orchestrator | ++ semver 9.5.0 9.0.0 2026-01-05 00:23:25.456173 | orchestrator | + [[ 1 -lt 0 ]] 2026-01-05 00:23:25.456925 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-05 00:23:37.560375 | orchestrator | 2026-01-05 00:23:37 | INFO  | Task d442ce64-7de9-40e8-83e7-17f109667cbf (operator) was prepared for execution. 2026-01-05 00:23:37.560589 | orchestrator | 2026-01-05 00:23:37 | INFO  | It takes a moment until task d442ce64-7de9-40e8-83e7-17f109667cbf (operator) has been started and output is visible here. 2026-01-05 00:23:53.694818 | orchestrator | 2026-01-05 00:23:53.694921 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-05 00:23:53.694933 | orchestrator | 2026-01-05 00:23:53.694942 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:23:53.694950 | orchestrator | Monday 05 January 2026 00:23:41 +0000 (0:00:00.145) 0:00:00.145 ******** 2026-01-05 00:23:53.694957 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:53.694966 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:53.694973 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:53.694980 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:53.694987 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:53.694994 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:53.695001 | orchestrator | 2026-01-05 00:23:53.695009 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-05 00:23:53.695016 | orchestrator | Monday 05 January 2026 00:23:45 +0000 (0:00:03.278) 0:00:03.423 
******** 2026-01-05 00:23:53.695023 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:53.695030 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:53.695037 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:53.695044 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:53.695051 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:53.695058 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:53.695065 | orchestrator | 2026-01-05 00:23:53.695072 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-05 00:23:53.695079 | orchestrator | 2026-01-05 00:23:53.695086 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:23:53.695093 | orchestrator | Monday 05 January 2026 00:23:45 +0000 (0:00:00.843) 0:00:04.266 ******** 2026-01-05 00:23:53.695100 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:53.695108 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:53.695115 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:53.695141 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:53.695149 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:53.695156 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:53.695163 | orchestrator | 2026-01-05 00:23:53.695170 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:23:53.695177 | orchestrator | Monday 05 January 2026 00:23:46 +0000 (0:00:00.199) 0:00:04.465 ******** 2026-01-05 00:23:53.695184 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:23:53.695191 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:23:53.695198 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:23:53.695205 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:23:53.695212 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:23:53.695219 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:23:53.695226 | orchestrator | 2026-01-05 00:23:53.695233 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-05 00:23:53.695240 | orchestrator | Monday 05 January 2026 00:23:46 +0000 (0:00:00.193) 0:00:04.659 ******** 2026-01-05 00:23:53.695248 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:53.695258 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:53.695271 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:53.695285 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:23:53.695298 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:53.695311 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:53.695324 | orchestrator | 2026-01-05 00:23:53.695336 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-05 00:23:53.695348 | orchestrator | Monday 05 January 2026 00:23:46 +0000 (0:00:00.635) 0:00:05.295 ******** 2026-01-05 00:23:53.695361 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:53.695373 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:53.695385 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:53.695423 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:53.695437 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:23:53.695451 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:53.695463 | orchestrator | 2026-01-05 00:23:53.695477 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-05 00:23:53.695490 | orchestrator | Monday 05 January 2026 00:23:47 +0000 (0:00:00.839) 0:00:06.135 ******** 2026-01-05 00:23:53.695505 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-05 00:23:53.695518 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-05 00:23:53.695531 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-05 00:23:53.695568 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-05 00:23:53.695576 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-05 00:23:53.695585 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-01-05 00:23:53.695594 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-01-05 00:23:53.695602 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-01-05 00:23:53.695610 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-01-05 00:23:53.695618 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-01-05 00:23:53.695627 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-01-05 00:23:53.695635 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-01-05 00:23:53.695644 | orchestrator | 2026-01-05 00:23:53.695652 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-05 00:23:53.695660 | orchestrator | Monday 05 January 2026 00:23:48 +0000 (0:00:01.187) 0:00:07.323 ******** 2026-01-05 00:23:53.695668 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:53.695676 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:53.695684 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:53.695693 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:53.695702 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:53.695710 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:23:53.695719 | orchestrator | 2026-01-05 00:23:53.695728 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-05 00:23:53.695736 | orchestrator | Monday 05 January 2026 00:23:50 +0000 (0:00:01.202) 0:00:08.526 ******** 2026-01-05 00:23:53.695743 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-01-05 00:23:53.695750 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-01-05 00:23:53.695757 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-01-05 00:23:53.695764 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695790 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695798 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695805 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695812 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695819 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:23:53.695826 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695833 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695840 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695847 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695854 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695861 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-01-05 00:23:53.695868 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695875 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695882 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695897 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695905 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695912 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:23:53.695919 | 
orchestrator | 2026-01-05 00:23:53.695926 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-05 00:23:53.695934 | orchestrator | Monday 05 January 2026 00:23:51 +0000 (0:00:01.257) 0:00:09.784 ******** 2026-01-05 00:23:53.695941 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:53.695948 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:53.695956 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:53.695965 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:53.695973 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:53.695982 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:53.695990 | orchestrator | 2026-01-05 00:23:53.695999 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-05 00:23:53.696007 | orchestrator | Monday 05 January 2026 00:23:51 +0000 (0:00:00.198) 0:00:09.983 ******** 2026-01-05 00:23:53.696016 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:53.696024 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:53.696032 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:53.696041 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:53.696049 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:53.696058 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:53.696066 | orchestrator | 2026-01-05 00:23:53.696075 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-05 00:23:53.696083 | orchestrator | Monday 05 January 2026 00:23:51 +0000 (0:00:00.235) 0:00:10.218 ******** 2026-01-05 00:23:53.696092 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:53.696100 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:53.696108 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:53.696117 | orchestrator | changed: [testbed-node-0] 2026-01-05 
00:23:53.696125 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:53.696134 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:53.696142 | orchestrator | 2026-01-05 00:23:53.696150 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-05 00:23:53.696159 | orchestrator | Monday 05 January 2026 00:23:52 +0000 (0:00:00.615) 0:00:10.833 ******** 2026-01-05 00:23:53.696167 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:53.696176 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:53.696184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:53.696192 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:53.696201 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:53.696209 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:53.696217 | orchestrator | 2026-01-05 00:23:53.696226 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-05 00:23:53.696235 | orchestrator | Monday 05 January 2026 00:23:52 +0000 (0:00:00.173) 0:00:11.007 ******** 2026-01-05 00:23:53.696243 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-05 00:23:53.696262 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:23:53.696271 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-05 00:23:53.696280 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:53.696288 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:23:53.696297 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:53.696305 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:23:53.696314 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:53.696322 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-05 00:23:53.696331 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:53.696339 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 
00:23:53.696347 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:53.696362 | orchestrator | 2026-01-05 00:23:53.696371 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-05 00:23:53.696379 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.682) 0:00:11.689 ******** 2026-01-05 00:23:53.696388 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:53.696396 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:53.696405 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:53.696413 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:53.696421 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:53.696430 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:53.696438 | orchestrator | 2026-01-05 00:23:53.696447 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-05 00:23:53.696455 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.160) 0:00:11.849 ******** 2026-01-05 00:23:53.696464 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:53.696473 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:53.696481 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:53.696489 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:53.696504 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:55.056733 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:55.056843 | orchestrator | 2026-01-05 00:23:55.056859 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-05 00:23:55.056872 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.177) 0:00:12.027 ******** 2026-01-05 00:23:55.056883 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:55.056894 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:55.056905 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
00:23:55.056916 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:55.056927 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:55.056938 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:55.056948 | orchestrator | 2026-01-05 00:23:55.056960 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-05 00:23:55.056970 | orchestrator | Monday 05 January 2026 00:23:53 +0000 (0:00:00.152) 0:00:12.180 ******** 2026-01-05 00:23:55.056981 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:23:55.056992 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:23:55.057003 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:23:55.057013 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:23:55.057024 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:23:55.057051 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:23:55.057073 | orchestrator | 2026-01-05 00:23:55.057084 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-05 00:23:55.057097 | orchestrator | Monday 05 January 2026 00:23:54 +0000 (0:00:00.650) 0:00:12.830 ******** 2026-01-05 00:23:55.057108 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:23:55.057118 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:23:55.057129 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:23:55.057140 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:23:55.057175 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:23:55.057187 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:23:55.057198 | orchestrator | 2026-01-05 00:23:55.057209 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:23:55.057222 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057234 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057245 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057257 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057296 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057309 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 00:23:55.057322 | orchestrator | 2026-01-05 00:23:55.057335 | orchestrator | 2026-01-05 00:23:55.057347 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:23:55.057361 | orchestrator | Monday 05 January 2026 00:23:54 +0000 (0:00:00.286) 0:00:13.117 ******** 2026-01-05 00:23:55.057373 | orchestrator | =============================================================================== 2026-01-05 00:23:55.057386 | orchestrator | Gathering Facts --------------------------------------------------------- 3.28s 2026-01-05 00:23:55.057399 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2026-01-05 00:23:55.057413 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-01-05 00:23:55.057425 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-01-05 00:23:55.057437 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s 2026-01-05 00:23:55.057450 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-01-05 00:23:55.057462 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s 2026-01-05 00:23:55.057474 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.65s 2026-01-05 00:23:55.057486 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2026-01-05 00:23:55.057499 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2026-01-05 00:23:55.057512 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2026-01-05 00:23:55.057525 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.24s 2026-01-05 00:23:55.057586 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-01-05 00:23:55.057601 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.20s 2026-01-05 00:23:55.057615 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2026-01-05 00:23:55.057626 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2026-01-05 00:23:55.057637 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-01-05 00:23:55.057647 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2026-01-05 00:23:55.057658 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-01-05 00:23:55.398480 | orchestrator | + osism apply --environment custom facts 2026-01-05 00:23:57.369828 | orchestrator | 2026-01-05 00:23:57 | INFO  | Trying to run play facts in environment custom 2026-01-05 00:24:07.607235 | orchestrator | 2026-01-05 00:24:07 | INFO  | Task f4eaa80c-bf39-4384-929c-aab3ca26524a (facts) was prepared for execution. 2026-01-05 00:24:07.607360 | orchestrator | 2026-01-05 00:24:07 | INFO  | It takes a moment until task f4eaa80c-bf39-4384-929c-aab3ca26524a (facts) has been started and output is visible here. 
2026-01-05 00:24:49.830454 | orchestrator | 2026-01-05 00:24:49.830544 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-01-05 00:24:49.830550 | orchestrator | 2026-01-05 00:24:49.830555 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-05 00:24:49.830559 | orchestrator | Monday 05 January 2026 00:24:11 +0000 (0:00:00.086) 0:00:00.086 ******** 2026-01-05 00:24:49.830564 | orchestrator | ok: [testbed-manager] 2026-01-05 00:24:49.830569 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:24:49.830574 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:24:49.830619 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:24:49.830624 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.830628 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.830632 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.830635 | orchestrator | 2026-01-05 00:24:49.830639 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-01-05 00:24:49.830643 | orchestrator | Monday 05 January 2026 00:24:13 +0000 (0:00:01.410) 0:00:01.496 ******** 2026-01-05 00:24:49.830647 | orchestrator | ok: [testbed-manager] 2026-01-05 00:24:49.830651 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.830691 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:24:49.830695 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:24:49.830699 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.830703 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.830707 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:24:49.830710 | orchestrator | 2026-01-05 00:24:49.830714 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-01-05 00:24:49.830718 | orchestrator | 2026-01-05 00:24:49.830722 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-01-05 00:24:49.830726 | orchestrator | Monday 05 January 2026 00:24:14 +0000 (0:00:01.200) 0:00:02.697 ******** 2026-01-05 00:24:49.830730 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:24:49.830734 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:24:49.830738 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:24:49.830741 | orchestrator | 2026-01-05 00:24:49.830745 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:24:49.830751 | orchestrator | Monday 05 January 2026 00:24:14 +0000 (0:00:00.119) 0:00:02.816 ******** 2026-01-05 00:24:49.830754 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:24:49.830758 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:24:49.830762 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:24:49.830765 | orchestrator | 2026-01-05 00:24:49.830769 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:24:49.830773 | orchestrator | Monday 05 January 2026 00:24:14 +0000 (0:00:00.227) 0:00:03.044 ******** 2026-01-05 00:24:49.830776 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:24:49.830780 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:24:49.830784 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:24:49.830788 | orchestrator | 2026-01-05 00:24:49.830792 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:24:49.830796 | orchestrator | Monday 05 January 2026 00:24:14 +0000 (0:00:00.208) 0:00:03.252 ******** 2026-01-05 00:24:49.830801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:24:49.830806 | orchestrator | 2026-01-05 00:24:49.830810 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-01-05 00:24:49.830814 | orchestrator | Monday 05 January 2026 00:24:15 +0000 (0:00:00.159) 0:00:03.412 ******** 2026-01-05 00:24:49.830817 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:24:49.830821 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:24:49.830825 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:24:49.830828 | orchestrator | 2026-01-05 00:24:49.830832 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:24:49.830836 | orchestrator | Monday 05 January 2026 00:24:15 +0000 (0:00:00.430) 0:00:03.842 ******** 2026-01-05 00:24:49.830840 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:24:49.830843 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:24:49.830847 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:24:49.830851 | orchestrator | 2026-01-05 00:24:49.830854 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:24:49.830858 | orchestrator | Monday 05 January 2026 00:24:15 +0000 (0:00:00.150) 0:00:03.993 ******** 2026-01-05 00:24:49.830862 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.830866 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.830874 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.830878 | orchestrator | 2026-01-05 00:24:49.830882 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:24:49.830885 | orchestrator | Monday 05 January 2026 00:24:16 +0000 (0:00:01.036) 0:00:05.029 ******** 2026-01-05 00:24:49.830889 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:24:49.830893 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:24:49.830896 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:24:49.830900 | orchestrator | 2026-01-05 00:24:49.830904 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-05 
00:24:49.830908 | orchestrator | Monday 05 January 2026 00:24:17 +0000 (0:00:00.461) 0:00:05.491 ******** 2026-01-05 00:24:49.830911 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.830915 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.830919 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.830923 | orchestrator | 2026-01-05 00:24:49.830926 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:24:49.830930 | orchestrator | Monday 05 January 2026 00:24:18 +0000 (0:00:01.028) 0:00:06.520 ******** 2026-01-05 00:24:49.830973 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.830978 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.830982 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.830986 | orchestrator | 2026-01-05 00:24:49.830989 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-01-05 00:24:49.830993 | orchestrator | Monday 05 January 2026 00:24:33 +0000 (0:00:15.380) 0:00:21.901 ******** 2026-01-05 00:24:49.830997 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:24:49.831000 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:24:49.831005 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:24:49.831009 | orchestrator | 2026-01-05 00:24:49.831014 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-01-05 00:24:49.831028 | orchestrator | Monday 05 January 2026 00:24:33 +0000 (0:00:00.101) 0:00:22.002 ******** 2026-01-05 00:24:49.831033 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:24:49.831037 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:24:49.831042 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:24:49.831046 | orchestrator | 2026-01-05 00:24:49.831051 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-05 
00:24:49.831055 | orchestrator | Monday 05 January 2026 00:24:40 +0000 (0:00:07.179) 0:00:29.181 ********
2026-01-05 00:24:49.831060 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:49.831064 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:49.831068 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:49.831073 | orchestrator |
2026-01-05 00:24:49.831076 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-05 00:24:49.831080 | orchestrator | Monday 05 January 2026 00:24:41 +0000 (0:00:00.442) 0:00:29.624 ********
2026-01-05 00:24:49.831084 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-05 00:24:49.831092 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-05 00:24:49.831095 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-05 00:24:49.831099 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:49.831103 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:49.831107 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-05 00:24:49.831110 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:49.831114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:49.831118 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-05 00:24:49.831122 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:49.831125 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:49.831133 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:24:49.831137 | orchestrator |
2026-01-05 00:24:49.831140 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:24:49.831144 | orchestrator | Monday 05 January 2026 00:24:44 +0000 (0:00:03.522) 0:00:33.146 ********
2026-01-05 00:24:49.831148 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:49.831152 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:49.831155 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:49.831159 | orchestrator |
2026-01-05 00:24:49.831163 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:24:49.831166 | orchestrator |
2026-01-05 00:24:49.831170 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:24:49.831174 | orchestrator | Monday 05 January 2026 00:24:46 +0000 (0:00:01.322) 0:00:34.469 ********
2026-01-05 00:24:49.831178 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:24:49.831181 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:24:49.831185 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:24:49.831189 | orchestrator | ok: [testbed-manager]
2026-01-05 00:24:49.831193 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:24:49.831196 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:24:49.831200 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:24:49.831204 | orchestrator |
2026-01-05 00:24:49.831207 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:24:49.831212 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:49.831217 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:49.831223 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:49.831226 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:24:49.831230 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:49.831235 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:49.831238 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:24:49.831242 | orchestrator |
2026-01-05 00:24:49.831246 | orchestrator |
2026-01-05 00:24:49.831250 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:24:49.831253 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:03.681) 0:00:38.150 ********
2026-01-05 00:24:49.831257 | orchestrator | ===============================================================================
2026-01-05 00:24:49.831261 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.38s
2026-01-05 00:24:49.831265 | orchestrator | Install required packages (Debian) -------------------------------------- 7.18s
2026-01-05 00:24:49.831269 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s
2026-01-05 00:24:49.831272 | orchestrator | Copy fact files --------------------------------------------------------- 3.52s
2026-01-05 00:24:49.831276 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-01-05 00:24:49.831280 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-01-05 00:24:49.831286 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2026-01-05 00:24:50.086354 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-01-05 00:24:50.086459 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2026-01-05 00:24:50.086500 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-01-05 00:24:50.086511 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-01-05 00:24:50.086521 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-01-05 00:24:50.086531 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-01-05 00:24:50.086540 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-01-05 00:24:50.086565 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-05 00:24:50.086577 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-05 00:24:50.086694 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-01-05 00:24:50.086714 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-05 00:24:50.419721 | orchestrator | + osism apply bootstrap
2026-01-05 00:25:02.771748 | orchestrator | 2026-01-05 00:25:02 | INFO  | Task ce5d4a01-926c-4760-9283-cb506a668e4b (bootstrap) was prepared for execution.
2026-01-05 00:25:02.773381 | orchestrator | 2026-01-05 00:25:02 | INFO  | It takes a moment until task ce5d4a01-926c-4760-9283-cb506a668e4b (bootstrap) has been started and output is visible here.
2026-01-05 00:25:18.993050 | orchestrator |
2026-01-05 00:25:18.993174 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-05 00:25:18.993192 | orchestrator |
2026-01-05 00:25:18.993204 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-05 00:25:18.993217 | orchestrator | Monday 05 January 2026 00:25:07 +0000 (0:00:00.153) 0:00:00.153 ********
2026-01-05 00:25:18.993228 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:18.993241 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:18.993252 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:18.993263 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:18.993275 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:18.993286 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:18.993297 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:18.993307 | orchestrator |
2026-01-05 00:25:18.993318 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:25:18.993329 | orchestrator |
2026-01-05 00:25:18.993340 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:25:18.993351 | orchestrator | Monday 05 January 2026 00:25:07 +0000 (0:00:00.268) 0:00:00.421 ********
2026-01-05 00:25:18.993362 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:18.993373 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:18.993383 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:18.993394 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:18.993405 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:18.993416 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:18.993426 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:18.993437 | orchestrator |
2026-01-05 00:25:18.993448 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-05 00:25:18.993459 | orchestrator |
2026-01-05 00:25:18.993470 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:25:18.993480 | orchestrator | Monday 05 January 2026 00:25:11 +0000 (0:00:03.613) 0:00:04.034 ********
2026-01-05 00:25:18.993492 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:25:18.993504 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:25:18.993514 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:25:18.993525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-05 00:25:18.993536 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:25:18.993546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:25:18.993557 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:25:18.993593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-05 00:25:18.993606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:25:18.993647 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:25:18.993660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:25:18.993673 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:25:18.993686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:25:18.993698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-05 00:25:18.993710 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:18.993723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:25:18.993735 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-05 00:25:18.993748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:25:18.993760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-05 00:25:18.993772 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:18.993785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-05 00:25:18.993797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-05 00:25:18.993809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-05 00:25:18.993821 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-05 00:25:18.993833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-05 00:25:18.993845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:25:18.993857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-05 00:25:18.993869 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-05 00:25:18.993881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-05 00:25:18.993893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:25:18.993906 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:25:18.993919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-05 00:25:18.993930 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:18.993940 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:25:18.993951 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-05 00:25:18.993962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:25:18.993972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-05 00:25:18.993983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:25:18.993994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-05 00:25:18.994005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:25:18.994078 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-05 00:25:18.994091 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-05 00:25:18.994102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:25:18.994113 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:18.994123 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:25:18.994134 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:18.994145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-05 00:25:18.994184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:25:18.994196 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-05 00:25:18.994207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:25:18.994217 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:25:18.994228 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:25:18.994249 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:18.994260 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:25:18.994270 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:25:18.994281 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:18.994292 | orchestrator |
2026-01-05 00:25:18.994324 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-05 00:25:18.994335 | orchestrator |
2026-01-05 00:25:18.994346 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-05 00:25:18.994357 | orchestrator | Monday 05 January 2026 00:25:11 +0000 (0:00:00.475) 0:00:04.510 ********
2026-01-05 00:25:18.994368 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:18.994379 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:18.994389 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:18.994400 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:18.994410 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:18.994421 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:18.994432 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:18.994442 | orchestrator |
2026-01-05 00:25:18.994453 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-05 00:25:18.994464 | orchestrator | Monday 05 January 2026 00:25:12 +0000 (0:00:01.206) 0:00:05.716 ********
2026-01-05 00:25:18.994475 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:18.994485 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:18.994496 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:18.994507 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:18.994517 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:18.994527 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:18.994538 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:18.994548 | orchestrator |
2026-01-05 00:25:18.994559 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-05 00:25:18.994570 | orchestrator | Monday 05 January 2026 00:25:14 +0000 (0:00:01.288) 0:00:07.004 ********
2026-01-05 00:25:18.994581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:18.994594 | orchestrator |
2026-01-05 00:25:18.994605 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-05 00:25:18.994672 | orchestrator | Monday 05 January 2026 00:25:14 +0000 (0:00:00.290) 0:00:07.295 ********
2026-01-05 00:25:18.994686 | orchestrator | changed: [testbed-manager]
2026-01-05 00:25:18.994696 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:18.994707 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:18.994718 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:18.994728 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:18.994739 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:18.994750 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:18.994760 | orchestrator |
2026-01-05 00:25:18.994771 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-05 00:25:18.994782 | orchestrator | Monday 05 January 2026 00:25:16 +0000 (0:00:02.166) 0:00:09.461 ********
2026-01-05 00:25:18.994792 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:18.994805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:18.994818 | orchestrator |
2026-01-05 00:25:18.994829 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-05 00:25:18.994840 | orchestrator | Monday 05 January 2026 00:25:16 +0000 (0:00:00.288) 0:00:09.749 ********
2026-01-05 00:25:18.994850 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:18.994861 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:18.994871 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:18.994891 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:18.994902 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:18.994912 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:18.994923 | orchestrator |
2026-01-05 00:25:18.994934 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-05 00:25:18.994944 | orchestrator | Monday 05 January 2026 00:25:17 +0000 (0:00:00.997) 0:00:10.747 ********
2026-01-05 00:25:18.994955 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:18.994966 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:18.994976 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:18.994987 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:18.994998 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:18.995008 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:18.995019 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:18.995029 | orchestrator |
2026-01-05 00:25:18.995040 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-05 00:25:18.995057 | orchestrator | Monday 05 January 2026 00:25:18 +0000 (0:00:00.583) 0:00:11.330 ********
2026-01-05 00:25:18.995068 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:18.995078 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:18.995089 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:18.995099 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:18.995110 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:18.995121 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:18.995131 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:18.995142 | orchestrator |
2026-01-05 00:25:18.995153 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-05 00:25:18.995165 | orchestrator | Monday 05 January 2026 00:25:18 +0000 (0:00:00.434) 0:00:11.765 ********
2026-01-05 00:25:18.995176 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:18.995186 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:18.995205 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:31.181823 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:31.181941 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:31.181957 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:31.181969 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:31.181980 | orchestrator |
2026-01-05 00:25:31.181993 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-05 00:25:31.182005 | orchestrator | Monday 05 January 2026 00:25:19 +0000 (0:00:00.235) 0:00:12.001 ********
2026-01-05 00:25:31.182080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:31.182112 | orchestrator |
2026-01-05 00:25:31.182124 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-05 00:25:31.182137 | orchestrator | Monday 05 January 2026 00:25:19 +0000 (0:00:00.287) 0:00:12.288 ********
2026-01-05 00:25:31.182148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:31.182160 | orchestrator |
2026-01-05 00:25:31.182171 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-05 00:25:31.182182 | orchestrator | Monday 05 January 2026 00:25:19 +0000 (0:00:00.361) 0:00:12.649 ********
2026-01-05 00:25:31.182193 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.182206 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.182217 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.182228 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.182239 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.182250 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.182261 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.182299 | orchestrator |
2026-01-05 00:25:31.182311 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-05 00:25:31.182322 | orchestrator | Monday 05 January 2026 00:25:21 +0000 (0:00:01.386) 0:00:14.036 ********
2026-01-05 00:25:31.182332 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:31.182343 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:31.182354 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:31.182364 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:31.182375 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:31.182386 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:31.182396 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:31.182407 | orchestrator |
2026-01-05 00:25:31.182418 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-05 00:25:31.182429 | orchestrator | Monday 05 January 2026 00:25:21 +0000 (0:00:00.343) 0:00:14.380 ********
2026-01-05 00:25:31.182439 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.182450 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.182461 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.182472 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.182482 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.182493 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.182503 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.182514 | orchestrator |
2026-01-05 00:25:31.182525 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-05 00:25:31.182536 | orchestrator | Monday 05 January 2026 00:25:22 +0000 (0:00:00.583) 0:00:14.963 ********
2026-01-05 00:25:31.182547 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:31.182557 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:31.182568 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:31.182579 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:31.182590 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:31.182601 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:31.182611 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:31.182622 | orchestrator |
2026-01-05 00:25:31.182655 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-05 00:25:31.182667 | orchestrator | Monday 05 January 2026 00:25:22 +0000 (0:00:00.309) 0:00:15.273 ********
2026-01-05 00:25:31.182678 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.182689 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:31.182699 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:31.182710 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:31.182721 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:31.182731 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:31.182742 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:31.182752 | orchestrator |
2026-01-05 00:25:31.182763 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-05 00:25:31.182774 | orchestrator | Monday 05 January 2026 00:25:22 +0000 (0:00:00.585) 0:00:15.858 ********
2026-01-05 00:25:31.182784 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.182795 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:31.182806 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:31.182817 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:31.182827 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:31.182838 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:31.182859 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:31.182870 | orchestrator |
2026-01-05 00:25:31.182881 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-05 00:25:31.182892 | orchestrator | Monday 05 January 2026 00:25:24 +0000 (0:00:01.178) 0:00:17.037 ********
2026-01-05 00:25:31.182903 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.182913 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.182924 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.182935 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.182946 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.182966 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.182976 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.182987 | orchestrator |
2026-01-05 00:25:31.182998 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-05 00:25:31.183008 | orchestrator | Monday 05 January 2026 00:25:25 +0000 (0:00:01.001) 0:00:18.038 ********
2026-01-05 00:25:31.183039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:31.183051 | orchestrator |
2026-01-05 00:25:31.183062 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-05 00:25:31.183073 | orchestrator | Monday 05 January 2026 00:25:25 +0000 (0:00:00.315) 0:00:18.354 ********
2026-01-05 00:25:31.183084 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:31.183094 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:31.183105 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:31.183116 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:31.183127 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:31.183138 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:31.183148 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:31.183159 | orchestrator |
2026-01-05 00:25:31.183170 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:25:31.183181 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:01.237) 0:00:19.592 ********
2026-01-05 00:25:31.183191 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183202 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183213 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.183224 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.183235 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.183245 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.183256 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.183266 | orchestrator |
2026-01-05 00:25:31.183277 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:25:31.183288 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:00.237) 0:00:19.829 ********
2026-01-05 00:25:31.183299 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183309 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183320 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.183330 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.183341 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.183351 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.183362 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.183372 | orchestrator |
2026-01-05 00:25:31.183383 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:25:31.183394 | orchestrator | Monday 05 January 2026 00:25:27 +0000 (0:00:00.237) 0:00:20.067 ********
2026-01-05 00:25:31.183405 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183415 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183426 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.183442 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.183460 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.183478 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.183495 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.183514 | orchestrator |
2026-01-05 00:25:31.183535 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:25:31.183555 | orchestrator | Monday 05 January 2026 00:25:27 +0000 (0:00:00.219) 0:00:20.286 ********
2026-01-05 00:25:31.183569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:25:31.183582 | orchestrator |
2026-01-05 00:25:31.183593 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:25:31.183613 | orchestrator | Monday 05 January 2026 00:25:27 +0000 (0:00:00.335) 0:00:20.622 ********
2026-01-05 00:25:31.183647 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183659 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183670 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.183681 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.183692 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.183702 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.183713 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.183724 | orchestrator |
2026-01-05 00:25:31.183735 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:25:31.183746 | orchestrator | Monday 05 January 2026 00:25:28 +0000 (0:00:00.538) 0:00:21.160 ********
2026-01-05 00:25:31.183756 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:25:31.183767 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:31.183778 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:31.183789 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:31.183800 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:31.183810 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:31.183821 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:31.183832 | orchestrator |
2026-01-05 00:25:31.183843 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:25:31.183853 | orchestrator | Monday 05 January 2026 00:25:28 +0000 (0:00:00.232) 0:00:21.393 ********
2026-01-05 00:25:31.183864 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183875 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183886 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.183896 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.183907 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:31.183918 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:31.183929 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:31.183940 | orchestrator |
2026-01-05 00:25:31.183951 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:25:31.183962 | orchestrator | Monday 05 January 2026 00:25:29 +0000 (0:00:01.004) 0:00:22.397 ********
2026-01-05 00:25:31.183973 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.183984 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.183994 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.184005 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.184016 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:31.184027 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:31.184038 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:31.184048 | orchestrator |
2026-01-05 00:25:31.184059 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:25:31.184070 | orchestrator | Monday 05 January 2026 00:25:30 +0000 (0:00:00.565) 0:00:22.962 ********
2026-01-05 00:25:31.184081 | orchestrator | ok: [testbed-manager]
2026-01-05 00:25:31.184092 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:31.184102 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:31.184113 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:31.184132 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:13.264771 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:13.264926 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:13.264944 | orchestrator |
2026-01-05 00:26:13.264957 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:26:13.264970 | orchestrator | Monday 05 January 2026 00:25:31 +0000 (0:00:01.117) 0:00:24.080 ********
2026-01-05 00:26:13.264981 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:13.264993 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:13.265004 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:13.265015 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:13.265027 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:13.265037 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:13.265048 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:13.265059 | orchestrator |
2026-01-05 00:26:13.265071 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-05 00:26:13.265118 | orchestrator | Monday 05 January 2026 00:25:46 +0000 (0:00:15.397) 0:00:39.477 ********
2026-01-05 00:26:13.265137 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:13.265154 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:13.265171 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:13.265187 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:13.265203 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:13.265220 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:13.265238 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:13.265257 | orchestrator |
2026-01-05 00:26:13.265276 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-05 00:26:13.265294 | orchestrator | Monday 05 January 2026 00:25:46 +0000 (0:00:00.222) 0:00:39.700 ********
2026-01-05 00:26:13.265314 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:13.265332 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:13.265351 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:13.265364 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:13.265377 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:13.265388 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:13.265400 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:13.265413 | orchestrator |
2026-01-05 00:26:13.265425 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-05 00:26:13.265437 | orchestrator | Monday 05 January 2026 00:25:47 +0000 (0:00:00.227) 0:00:39.928 ********
2026-01-05 00:26:13.265449 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:13.265461 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:13.265474 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:13.265487 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:13.265499 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:13.265511 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:13.265523 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:13.265535 | orchestrator |
2026-01-05 00:26:13.265549 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-05 00:26:13.265561 | orchestrator | Monday 05 January 2026 00:25:47 +0000 (0:00:00.248) 0:00:40.176 ********
2026-01-05 00:26:13.265576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:26:13.265589 | orchestrator |
2026-01-05 00:26:13.265600 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-05 00:26:13.265611 | orchestrator | Monday 05 January 2026 00:25:47 +0000 (0:00:00.337) 0:00:40.514 ********
2026-01-05 00:26:13.265621 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:13.265632 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:13.265642 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:13.265653 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:13.265694 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:13.265705 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:13.265716 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:13.265727 | orchestrator |
2026-01-05 00:26:13.265738 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-05 00:26:13.265749 | orchestrator | Monday 05 January 2026 00:25:49 +0000 (0:00:01.605) 0:00:42.120 ********
2026-01-05 00:26:13.265760 | orchestrator | changed: [testbed-manager]
2026-01-05 00:26:13.265771 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:13.265782 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:13.265793 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:13.265804 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:13.265815 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:13.265826 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:13.265837 | orchestrator |
2026-01-05 00:26:13.265848 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-05
00:26:13.265859 | orchestrator | Monday 05 January 2026 00:25:50 +0000 (0:00:01.038) 0:00:43.158 ******** 2026-01-05 00:26:13.265881 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.265892 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.265903 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.265914 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:13.265925 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.265936 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.265947 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.265958 | orchestrator | 2026-01-05 00:26:13.265969 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-05 00:26:13.265980 | orchestrator | Monday 05 January 2026 00:25:51 +0000 (0:00:00.824) 0:00:43.983 ******** 2026-01-05 00:26:13.266000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:26:13.266078 | orchestrator | 2026-01-05 00:26:13.266093 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-05 00:26:13.266105 | orchestrator | Monday 05 January 2026 00:25:51 +0000 (0:00:00.337) 0:00:44.320 ******** 2026-01-05 00:26:13.266116 | orchestrator | changed: [testbed-manager] 2026-01-05 00:26:13.266127 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:26:13.266138 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:26:13.266149 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:26:13.266160 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:26:13.266171 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:26:13.266182 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:26:13.266193 | orchestrator | 2026-01-05 00:26:13.266225 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2026-01-05 00:26:13.266237 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:01.119) 0:00:45.440 ******** 2026-01-05 00:26:13.266248 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:26:13.266258 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:26:13.266269 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:26:13.266280 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:26:13.266291 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:26:13.266302 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:26:13.266312 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:26:13.266323 | orchestrator | 2026-01-05 00:26:13.266334 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-05 00:26:13.266345 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:00.304) 0:00:45.744 ******** 2026-01-05 00:26:13.266356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:26:13.266368 | orchestrator | 2026-01-05 00:26:13.266379 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-05 00:26:13.266389 | orchestrator | Monday 05 January 2026 00:25:53 +0000 (0:00:00.374) 0:00:46.119 ******** 2026-01-05 00:26:13.266400 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.266411 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.266422 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.266433 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.266443 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.266454 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.266465 | orchestrator | ok: [testbed-node-0] 2026-01-05 
00:26:13.266475 | orchestrator | 2026-01-05 00:26:13.266486 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-05 00:26:13.266497 | orchestrator | Monday 05 January 2026 00:25:54 +0000 (0:00:01.609) 0:00:47.728 ******** 2026-01-05 00:26:13.266508 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:26:13.266519 | orchestrator | changed: [testbed-manager] 2026-01-05 00:26:13.266530 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:26:13.266541 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:26:13.266561 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:26:13.266571 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:26:13.266582 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:26:13.266593 | orchestrator | 2026-01-05 00:26:13.266604 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-05 00:26:13.266615 | orchestrator | Monday 05 January 2026 00:25:55 +0000 (0:00:01.161) 0:00:48.890 ******** 2026-01-05 00:26:13.266626 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:26:13.266637 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:26:13.266647 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:26:13.266679 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:26:13.266691 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:26:13.266702 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:26:13.266713 | orchestrator | changed: [testbed-manager] 2026-01-05 00:26:13.266724 | orchestrator | 2026-01-05 00:26:13.266735 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-05 00:26:13.266746 | orchestrator | Monday 05 January 2026 00:26:09 +0000 (0:00:13.218) 0:01:02.109 ******** 2026-01-05 00:26:13.266756 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.266767 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.266778 | 
orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.266789 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.266800 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.266810 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:13.266821 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.266831 | orchestrator | 2026-01-05 00:26:13.266842 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-05 00:26:13.266853 | orchestrator | Monday 05 January 2026 00:26:10 +0000 (0:00:01.479) 0:01:03.588 ******** 2026-01-05 00:26:13.266865 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.266875 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.266886 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.266896 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:13.266907 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.266917 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.266928 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.266939 | orchestrator | 2026-01-05 00:26:13.266949 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-05 00:26:13.266960 | orchestrator | Monday 05 January 2026 00:26:12 +0000 (0:00:01.776) 0:01:05.364 ******** 2026-01-05 00:26:13.266971 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.266982 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.266992 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.267003 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.267013 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:13.267024 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.267034 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.267045 | orchestrator | 2026-01-05 00:26:13.267056 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-05 00:26:13.267067 | 
orchestrator | Monday 05 January 2026 00:26:12 +0000 (0:00:00.259) 0:01:05.624 ******** 2026-01-05 00:26:13.267078 | orchestrator | ok: [testbed-manager] 2026-01-05 00:26:13.267095 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:26:13.267106 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:26:13.267116 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:26:13.267127 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:26:13.267138 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:26:13.267148 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:26:13.267159 | orchestrator | 2026-01-05 00:26:13.267170 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-05 00:26:13.267180 | orchestrator | Monday 05 January 2026 00:26:12 +0000 (0:00:00.227) 0:01:05.851 ******** 2026-01-05 00:26:13.267192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:26:13.267210 | orchestrator | 2026-01-05 00:26:13.267230 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-05 00:28:29.991430 | orchestrator | Monday 05 January 2026 00:26:13 +0000 (0:00:00.315) 0:01:06.167 ******** 2026-01-05 00:28:29.991552 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:29.991564 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.991572 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.991578 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.991586 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.991591 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.991598 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.991604 | orchestrator | 2026-01-05 00:28:29.991611 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-01-05 00:28:29.991618 | orchestrator | Monday 05 January 2026 00:26:14 +0000 (0:00:01.603) 0:01:07.771 ******** 2026-01-05 00:28:29.991658 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:29.991667 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:29.991673 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:29.991680 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:29.991687 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:29.991694 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:29.991700 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:29.991707 | orchestrator | 2026-01-05 00:28:29.991713 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-05 00:28:29.991722 | orchestrator | Monday 05 January 2026 00:26:15 +0000 (0:00:00.554) 0:01:08.325 ******** 2026-01-05 00:28:29.991729 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:29.991735 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.991741 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.991747 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.991754 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.991782 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.991789 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.991795 | orchestrator | 2026-01-05 00:28:29.991801 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-05 00:28:29.991807 | orchestrator | Monday 05 January 2026 00:26:15 +0000 (0:00:00.275) 0:01:08.601 ******** 2026-01-05 00:28:29.991813 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.991820 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.991827 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.991834 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:29.991841 | orchestrator | ok: [testbed-node-1] 
2026-01-05 00:28:29.991848 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.991855 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.991862 | orchestrator | 2026-01-05 00:28:29.991869 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-05 00:28:29.991877 | orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:01.207) 0:01:09.809 ******** 2026-01-05 00:28:29.991884 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:29.991891 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:29.991898 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:29.991905 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:29.991912 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:29.991923 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:29.991930 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:29.991938 | orchestrator | 2026-01-05 00:28:29.991945 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-05 00:28:29.991953 | orchestrator | Monday 05 January 2026 00:26:18 +0000 (0:00:01.534) 0:01:11.343 ******** 2026-01-05 00:28:29.991960 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.991969 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:29.991977 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.991985 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.991992 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.992023 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.992030 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.992036 | orchestrator | 2026-01-05 00:28:29.992042 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-05 00:28:29.992049 | orchestrator | Monday 05 January 2026 00:26:20 +0000 (0:00:02.351) 0:01:13.695 ******** 2026-01-05 00:28:29.992054 | orchestrator | ok: 
[testbed-manager] 2026-01-05 00:28:29.992060 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.992066 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.992072 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.992079 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.992086 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.992093 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.992100 | orchestrator | 2026-01-05 00:28:29.992106 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-05 00:28:29.992112 | orchestrator | Monday 05 January 2026 00:26:57 +0000 (0:00:36.392) 0:01:50.087 ******** 2026-01-05 00:28:29.992118 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:29.992125 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:29.992130 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:29.992136 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:29.992143 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:29.992148 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:29.992154 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:29.992161 | orchestrator | 2026-01-05 00:28:29.992167 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-05 00:28:29.992173 | orchestrator | Monday 05 January 2026 00:28:13 +0000 (0:01:16.674) 0:03:06.762 ******** 2026-01-05 00:28:29.992179 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.992185 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.992191 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.992198 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:29.992204 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.992210 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.992216 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.992222 | orchestrator | 2026-01-05 00:28:29.992229 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-01-05 00:28:29.992235 | orchestrator | Monday 05 January 2026 00:28:15 +0000 (0:00:01.807) 0:03:08.569 ******** 2026-01-05 00:28:29.992241 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:29.992247 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:29.992253 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:29.992259 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:29.992265 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:29.992271 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:29.992277 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:29.992284 | orchestrator | 2026-01-05 00:28:29.992290 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-05 00:28:29.992296 | orchestrator | Monday 05 January 2026 00:28:28 +0000 (0:00:13.037) 0:03:21.607 ******** 2026-01-05 00:28:29.992340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-05 00:28:29.992365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-05 00:28:29.992383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-05 00:28:29.992390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:28:29.992397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:28:29.992405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-05 00:28:29.992412 | orchestrator | 2026-01-05 00:28:29.992419 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-05 00:28:29.992426 | orchestrator | Monday 05 January 2026 00:28:29 +0000 (0:00:00.445) 0:03:22.053 ******** 2026-01-05 00:28:29.992432 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-01-05 00:28:29.992439 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:29.992445 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:29.992450 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:28:29.992456 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:29.992461 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:28:29.992466 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:28:29.992471 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:28:29.992478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:29.992484 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:29.992491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:28:29.992497 | orchestrator | 2026-01-05 00:28:29.992506 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-05 00:28:29.992513 | orchestrator | Monday 05 January 2026 00:28:29 +0000 (0:00:00.746) 0:03:22.799 ******** 2026-01-05 00:28:29.992520 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:29.992528 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:29.992535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:29.992539 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:29.992543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-01-05 00:28:29.992554 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:34.674493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:34.674632 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:34.674648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:34.674659 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:34.674670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:34.674681 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:34.674692 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:28:34.674703 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:34.674713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:34.674724 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:34.674735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:34.674746 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:34.674756 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:28:34.674799 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-01-05 00:28:34.674811 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:34.674822 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:34.674833 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:34.674846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:34.674857 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:34.674868 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:34.674879 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:28:34.674889 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:34.674900 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:28:34.674910 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:28:34.674921 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:28:34.674932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:28:34.674942 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:28:34.674953 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:28:34.674963 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:28:34.674974 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:28:34.674985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:28:34.674995 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:34.675006 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:28:34.675018 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:28:34.675050 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:28:34.675091 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:28:34.675111 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:34.675129 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:34.675147 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:28:34.675165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:28:34.675183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:28:34.675202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:28:34.675220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:28:34.675261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:28:34.675280 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:28:34.675298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:28:34.675316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:28:34.675334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:28:34.675352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:28:34.675370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:28:34.675388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:28:34.675404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:28:34.675421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:28:34.675439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:28:34.675457 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:28:34.675475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:28:34.675494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:28:34.675513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:28:34.675532 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:28:34.675550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:28:34.675569 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:28:34.675587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:28:34.675605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:28:34.675624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:28:34.675636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:28:34.675647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:28:34.675658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:28:34.675680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:28:34.675691 | orchestrator |
2026-01-05 00:28:34.675703 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-05 00:28:34.675714 | orchestrator | Monday 05 January 2026 00:28:33 +0000 (0:00:03.689) 0:03:26.489 ********
2026-01-05 00:28:34.675724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675735 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675749 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675798 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675834 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:28:34.675868 | orchestrator |
2026-01-05 00:28:34.675885 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-05 00:28:34.675903 | orchestrator | Monday 05 January 2026 00:28:34 +0000 (0:00:00.576) 0:03:27.066 ********
2026-01-05 00:28:34.675931 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.675950 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:34.675967 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.675985 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.676003 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:28:34.676021 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:28:34.676039 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.676057 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:28:34.676075 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.676093 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:34.676130 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977089 | orchestrator |
2026-01-05 00:28:47.977214 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-05 00:28:47.977230 | orchestrator | Monday 05 January 2026 00:28:34 +0000 (0:00:00.507) 0:03:27.573 ********
2026-01-05 00:28:47.977242 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977254 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:47.977267 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977279 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977290 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:47.977301 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:47.977311 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977322 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:47.977333 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977344 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:28:47.977397 | orchestrator |
2026-01-05 00:28:47.977419 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-05 00:28:47.977438 | orchestrator | Monday 05 January 2026 00:28:35 +0000 (0:00:00.594) 0:03:28.168 ********
2026-01-05 00:28:47.977458 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977478 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:47.977499 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977518 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977539 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:28:47.977559 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:28:47.977581 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977603 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:28:47.977626 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977649 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977672 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:28:47.977686 | orchestrator |
2026-01-05 00:28:47.977699 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-05 00:28:47.977712 | orchestrator | Monday 05 January 2026 00:28:35 +0000 (0:00:00.610) 0:03:28.779 ********
2026-01-05 00:28:47.977725 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:47.977737 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:47.977749 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:47.977763 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:47.977808 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:28:47.977821 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:28:47.977833 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:28:47.977845 | orchestrator |
2026-01-05 00:28:47.977858 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-05 00:28:47.977871 | orchestrator | Monday 05 January 2026 00:28:36 +0000 (0:00:00.310) 0:03:29.089 ********
2026-01-05 00:28:47.977884 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:47.977897 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:47.977909 | orchestrator | ok: [testbed-node-5]
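For reference, the control-plane TCP tuning recorded by the `osism.commons.sysctl` tasks above could be reproduced with a minimal hand-written play. This is a hedged sketch using the stock `ansible.posix.sysctl` module with the values logged as `changed:` for testbed-node-0..2; it is not the actual implementation of the role, and the `hosts` group name is an assumption:

```yaml
# Sketch only: the osism.commons.sysctl role may implement this differently.
- name: Apply control-plane TCP tuning (sketch)
  hosts: control  # hypothetical group name
  become: true
  tasks:
    - name: Set sysctl parameters
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true   # also apply the value at runtime, not just persist it
        reload: true
      loop:
        - { name: net.ipv4.tcp_keepalive_time, value: 6 }
        - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
        - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
        - { name: net.core.wmem_max, value: 16777216 }
        - { name: net.core.rmem_max, value: 16777216 }
        - { name: net.ipv4.tcp_fin_timeout, value: 20 }
        - { name: net.ipv4.tcp_tw_reuse, value: 1 }
        - { name: net.core.somaxconn, value: 4096 }
        - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
        - { name: net.ipv4.tcp_syncookies, value: 0 }
```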
2026-01-05 00:28:47.977921 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:47.977935 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:47.977947 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:47.977958 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:47.977968 | orchestrator |
2026-01-05 00:28:47.977979 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-05 00:28:47.977990 | orchestrator | Monday 05 January 2026 00:28:41 +0000 (0:00:05.726) 0:03:34.816 ********
2026-01-05 00:28:47.978001 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-05 00:28:47.978077 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:47.978091 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-05 00:28:47.978102 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-05 00:28:47.978113 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:47.978125 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-05 00:28:47.978137 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:47.978148 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-05 00:28:47.978159 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:47.978170 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-05 00:28:47.978204 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:28:47.978216 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:28:47.978240 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-05 00:28:47.978252 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:28:47.978263 | orchestrator |
2026-01-05 00:28:47.978279 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-05 00:28:47.978298 | orchestrator | Monday 05 January 2026 00:28:42 +0000 (0:00:00.337) 0:03:35.154 ********
2026-01-05 00:28:47.978319 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-05 00:28:47.978338 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-05 00:28:47.978358 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-05 00:28:47.978401 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-05 00:28:47.978423 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-05 00:28:47.978443 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-05 00:28:47.978465 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-05 00:28:47.978485 | orchestrator |
2026-01-05 00:28:47.978500 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-05 00:28:47.978511 | orchestrator | Monday 05 January 2026 00:28:43 +0000 (0:00:01.112) 0:03:36.267 ********
2026-01-05 00:28:47.978525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:28:47.978539 | orchestrator |
2026-01-05 00:28:47.978550 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-05 00:28:47.978561 | orchestrator | Monday 05 January 2026 00:28:43 +0000 (0:00:00.482) 0:03:36.750 ********
2026-01-05 00:28:47.978572 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:47.978583 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:47.978593 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:47.978604 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:47.978622 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:47.978639 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:47.978657 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:47.978674 | orchestrator |
2026-01-05 00:28:47.978692 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
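The motd handling that the `osism.commons.motd` tasks in this run perform (disabling the dynamic motd-news feed, removing the `pam_motd.so` rule) can be approximated with two stock modules. A hedged sketch for a Debian-family host, not the literal content of `configure-Debian-family.yml`:

```yaml
# Sketch only: the osism.commons.motd role may implement this differently.
- name: Disable dynamic motd (sketch)
  hosts: all
  become: true
  tasks:
    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: ENABLED=0

    - name: Remove pam_motd.so rule from sshd
      ansible.builtin.lineinfile:
        path: /etc/pam.d/sshd
        regexp: 'pam_motd\.so'
        state: absent  # delete every line that loads pam_motd.so
```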
2026-01-05 00:28:47.978711 | orchestrator | Monday 05 January 2026 00:28:45 +0000 (0:00:01.237) 0:03:37.987 ********
2026-01-05 00:28:47.978730 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:47.978749 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:47.978820 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:47.978835 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:47.978846 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:47.978857 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:47.978868 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:47.978878 | orchestrator |
2026-01-05 00:28:47.978889 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-05 00:28:47.978900 | orchestrator | Monday 05 January 2026 00:28:45 +0000 (0:00:00.605) 0:03:38.592 ********
2026-01-05 00:28:47.978911 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:47.978922 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:47.978933 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:47.978944 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:47.978954 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:47.978965 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:47.978975 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:47.978986 | orchestrator |
2026-01-05 00:28:47.978997 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-05 00:28:47.979008 | orchestrator | Monday 05 January 2026 00:28:46 +0000 (0:00:00.622) 0:03:39.215 ********
2026-01-05 00:28:47.979019 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:47.979029 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:47.979040 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:47.979051 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:47.979061 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:47.979082 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:47.979093 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:47.979104 | orchestrator |
2026-01-05 00:28:47.979114 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-05 00:28:47.979125 | orchestrator | Monday 05 January 2026 00:28:46 +0000 (0:00:00.603) 0:03:39.819 ********
2026-01-05 00:28:47.979140 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571502.9152625, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:47.979163 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571528.5751889, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:47.979176 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571516.423838, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:47.979199 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571517.1822717, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878188 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571523.0076544, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878323 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571530.40457, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878340 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571525.427023, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878382 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878394 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878421 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878433 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878476 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878489 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878500 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 00:28:52.878520 | orchestrator |
2026-01-05 00:28:52.878533 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-05 00:28:52.878547 | orchestrator | Monday 05 January 2026 00:28:47 +0000 (0:00:01.053) 0:03:40.873 ********
2026-01-05 00:28:52.878558 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:52.878571 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:52.878582 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:52.878593 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:52.878604 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:52.878614 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:52.878628 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:52.878640 | orchestrator |
2026-01-05 00:28:52.878654 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-05 00:28:52.878667 | orchestrator | Monday 05 January 2026 00:28:49 +0000 (0:00:01.075) 0:03:41.948 ********
2026-01-05 00:28:52.878679 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:52.878691 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:52.878703 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:52.878715 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:52.878729 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:52.878741 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:52.878755 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:52.878767 | orchestrator |
2026-01-05 00:28:52.878814 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-05 00:28:52.878827 | orchestrator | Monday 05 January 2026 00:28:50 +0000 (0:00:01.169) 0:03:43.118 ********
2026-01-05 00:28:52.878840 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:52.878852 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:52.878864 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:52.878877 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:52.878889 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:52.878901 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:52.878914 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:52.878926 | orchestrator |
2026-01-05 00:28:52.878944 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-05 00:28:52.878958 | orchestrator | Monday 05 January 2026 00:28:51 +0000 (0:00:01.160) 0:03:44.278 ********
2026-01-05 00:28:52.878971 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:28:52.878982 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:52.878993 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:52.879004 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:52.879014 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:28:52.879025 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:28:52.879035 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:28:52.879046 | orchestrator |
2026-01-05 00:28:52.879057 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-05 00:28:52.879068 | orchestrator | Monday 05 January 2026 00:28:51 +0000 (0:00:00.278) 0:03:44.557 ********
2026-01-05 00:28:52.879079 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:52.879101 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:52.879120 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:52.879139 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:52.879158 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:52.879176 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:52.879194 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:52.879212 | orchestrator |
2026-01-05 00:28:52.879232 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-05 00:28:52.879264 | orchestrator | Monday 05 January 2026 00:28:52 +0000 (0:00:00.757) 0:03:45.314 ********
2026-01-05 00:28:52.879286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:28:52.879310 | orchestrator |
2026-01-05 00:28:52.879330 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-05 00:28:52.879363 | orchestrator | Monday 05 January 2026 00:28:52 +0000 (0:00:00.465) 0:03:45.779 ********
2026-01-05 00:30:10.057791 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.058142 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:30:10.058180 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:30:10.058203 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:30:10.058223 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:30:10.058243 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:30:10.058263 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:30:10.058284 | orchestrator |
2026-01-05 00:30:10.058307 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-05 00:30:10.058331 | orchestrator | Monday 05 January 2026 00:29:01 +0000 (0:00:08.523) 0:03:54.303 ********
2026-01-05 00:30:10.058353 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.058373 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.058393 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.058414 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.058431 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.058449 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.058466 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.058483 | orchestrator |
2026-01-05 00:30:10.058501 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-05 00:30:10.058518 | orchestrator | Monday 05 January 2026 00:29:02 +0000 (0:00:01.201) 0:03:55.505 ********
2026-01-05 00:30:10.058536 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.058554 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.058573 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.058590 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.058606 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.058624 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.058642 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.058659 | orchestrator |
2026-01-05 00:30:10.058676 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-05 00:30:10.058690 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:01.086) 0:03:56.591 ********
2026-01-05 00:30:10.058704 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.058719 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.058735 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.058752 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.058768 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.058785 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.058802 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.058818 | orchestrator |
2026-01-05 00:30:10.058869 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-05 00:30:10.058886 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:00.300) 0:03:56.891 ********
2026-01-05 00:30:10.058901 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.058916 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.058932 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.058948 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.058963 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.058979 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.058994 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.059009 | orchestrator |
2026-01-05 00:30:10.059024 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-05 00:30:10.059041 | orchestrator | Monday 05 January 2026 00:29:04 +0000 (0:00:00.357) 0:03:57.249 ********
2026-01-05 00:30:10.059088 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.059105 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.059121 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.059136 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.059152 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.059167 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.059182 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.059197 | orchestrator |
2026-01-05 00:30:10.059213 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-05 00:30:10.059228 | orchestrator | Monday 05 January 2026 00:29:04 +0000 (0:00:00.368) 0:03:57.618 ********
2026-01-05 00:30:10.059244 | orchestrator | ok: [testbed-manager]
2026-01-05 00:30:10.059260 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:30:10.059275 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:30:10.059289 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:30:10.059304 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:30:10.059319 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:30:10.059334 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:30:10.059349 | orchestrator |
2026-01-05 00:30:10.059365 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-05 00:30:10.059381 | orchestrator | Monday 05 January 2026 00:29:10 +0000 (0:00:05.657) 0:04:03.275 ********
2026-01-05 00:30:10.059398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:10.059416 | orchestrator |
2026-01-05 00:30:10.059432 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-05 00:30:10.059447 | orchestrator | Monday 05 January 2026 00:29:10 +0000 (0:00:00.451) 0:04:03.727 ********
2026-01-05 00:30:10.059463 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059479 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-05 00:30:10.059494 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059509 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:30:10.059524 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-05 00:30:10.059540 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059573 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-05 00:30:10.059589 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:30:10.059605 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059621 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:30:10.059637 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-05 00:30:10.059652 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059667 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-05 00:30:10.059682 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:30:10.059695 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059708 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-05 00:30:10.059748 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:30:10.059765 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:30:10.059780 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-05 00:30:10.059795 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-05 00:30:10.059810 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:30:10.059891 | orchestrator |
2026-01-05 00:30:10.059910 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-05 00:30:10.059926 | orchestrator | Monday 05 January 2026 00:29:11 +0000 (0:00:00.401) 0:04:04.128 ********
2026-01-05 00:30:10.059942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:10.059974 | orchestrator |
2026-01-05 00:30:10.059990 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-05 00:30:10.060006 | orchestrator | Monday 05 January 2026 00:29:11 +0000 (0:00:00.443) 0:04:04.572 ********
2026-01-05 00:30:10.060021 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-05 00:30:10.060037 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-05 00:30:10.060052 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:30:10.060067 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-05 00:30:10.060080 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:30:10.060095 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-05 00:30:10.060110 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:30:10.060126 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-05 00:30:10.060141 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:30:10.060157 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-05 00:30:10.060171 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:30:10.060186 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:30:10.060202 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-05 00:30:10.060218 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:30:10.060234 | orchestrator |
2026-01-05 00:30:10.060249 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-05 00:30:10.060264 | orchestrator | Monday 05 January 2026 00:29:12 +0000 (0:00:00.378) 0:04:04.951 ********
2026-01-05 00:30:10.060279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:30:10.060294 | orchestrator |
2026-01-05 00:30:10.060309 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-05 00:30:10.060325 | orchestrator | Monday 05 January 2026
00:29:12 +0000 (0:00:00.457) 0:04:05.409 ******** 2026-01-05 00:30:10.060340 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:10.060355 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:10.060370 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:10.060385 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:10.060400 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:10.060415 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:10.060431 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:10.060446 | orchestrator | 2026-01-05 00:30:10.060461 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-05 00:30:10.060476 | orchestrator | Monday 05 January 2026 00:29:46 +0000 (0:00:34.438) 0:04:39.848 ******** 2026-01-05 00:30:10.060492 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:10.060507 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:10.060520 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:10.060535 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:10.060551 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:10.060575 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:10.060591 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:10.060606 | orchestrator | 2026-01-05 00:30:10.060621 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-05 00:30:10.060636 | orchestrator | Monday 05 January 2026 00:29:54 +0000 (0:00:07.972) 0:04:47.820 ******** 2026-01-05 00:30:10.060651 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:10.060666 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:10.060680 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:10.060692 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:10.060705 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:10.060717 | orchestrator | 
changed: [testbed-node-0] 2026-01-05 00:30:10.060741 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:10.060755 | orchestrator | 2026-01-05 00:30:10.060771 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-01-05 00:30:10.060787 | orchestrator | Monday 05 January 2026 00:30:02 +0000 (0:00:07.498) 0:04:55.319 ******** 2026-01-05 00:30:10.060803 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:10.060819 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:10.060857 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:10.060871 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:10.060886 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:10.060902 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:10.060918 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:10.060933 | orchestrator | 2026-01-05 00:30:10.060949 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-01-05 00:30:10.060965 | orchestrator | Monday 05 January 2026 00:30:04 +0000 (0:00:01.699) 0:04:57.019 ******** 2026-01-05 00:30:10.060980 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:10.060995 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:10.061010 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:10.061025 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:10.061041 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:10.061056 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:10.061071 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:10.061087 | orchestrator | 2026-01-05 00:30:10.061117 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-01-05 00:30:21.762997 | orchestrator | Monday 05 January 2026 00:30:10 +0000 (0:00:05.929) 0:05:02.948 ******** 2026-01-05 00:30:21.763133 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:30:21.763153 | orchestrator | 2026-01-05 00:30:21.763165 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-01-05 00:30:21.763175 | orchestrator | Monday 05 January 2026 00:30:10 +0000 (0:00:00.469) 0:05:03.417 ******** 2026-01-05 00:30:21.763186 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:21.763197 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:21.763206 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:21.763216 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:21.763226 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:21.763236 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:21.763246 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:21.763256 | orchestrator | 2026-01-05 00:30:21.763266 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-01-05 00:30:21.763277 | orchestrator | Monday 05 January 2026 00:30:11 +0000 (0:00:00.743) 0:05:04.161 ******** 2026-01-05 00:30:21.763287 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:21.763298 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:21.763308 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:21.763317 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:21.763327 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:21.763338 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:21.763348 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:21.763358 | orchestrator | 2026-01-05 00:30:21.763369 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-01-05 00:30:21.763380 | orchestrator | Monday 05 January 2026 00:30:12 +0000 (0:00:01.694) 
0:05:05.856 ******** 2026-01-05 00:30:21.763390 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:21.763400 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:21.763412 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:21.763423 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:21.763433 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:21.763443 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:21.763453 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:21.763464 | orchestrator | 2026-01-05 00:30:21.763506 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-05 00:30:21.763515 | orchestrator | Monday 05 January 2026 00:30:13 +0000 (0:00:00.773) 0:05:06.629 ******** 2026-01-05 00:30:21.763525 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.763535 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.763545 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.763555 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.763565 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:21.763575 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:21.763584 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:21.763595 | orchestrator | 2026-01-05 00:30:21.763605 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-05 00:30:21.763615 | orchestrator | Monday 05 January 2026 00:30:14 +0000 (0:00:00.328) 0:05:06.957 ******** 2026-01-05 00:30:21.763624 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.763635 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.763646 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.763657 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.763667 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:21.763678 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:30:21.763689 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:21.763700 | orchestrator | 2026-01-05 00:30:21.763712 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-05 00:30:21.763723 | orchestrator | Monday 05 January 2026 00:30:14 +0000 (0:00:00.419) 0:05:07.376 ******** 2026-01-05 00:30:21.763734 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:21.763745 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:21.763757 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:21.763768 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:21.763801 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:21.763813 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:21.763824 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:21.763889 | orchestrator | 2026-01-05 00:30:21.763901 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-05 00:30:21.763911 | orchestrator | Monday 05 January 2026 00:30:14 +0000 (0:00:00.365) 0:05:07.742 ******** 2026-01-05 00:30:21.763922 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.763932 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.763943 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.763953 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.763963 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:21.763973 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:21.763983 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:21.763994 | orchestrator | 2026-01-05 00:30:21.764004 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-05 00:30:21.764017 | orchestrator | Monday 05 January 2026 00:30:15 +0000 (0:00:00.288) 0:05:08.031 ******** 2026-01-05 00:30:21.764027 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:21.764037 | 
orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:21.764046 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:21.764058 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:21.764070 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:21.764081 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:21.764091 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:21.764102 | orchestrator | 2026-01-05 00:30:21.764113 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-05 00:30:21.764123 | orchestrator | Monday 05 January 2026 00:30:15 +0000 (0:00:00.327) 0:05:08.359 ******** 2026-01-05 00:30:21.764134 | orchestrator | ok: [testbed-manager] =>  2026-01-05 00:30:21.764145 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764156 | orchestrator | ok: [testbed-node-3] =>  2026-01-05 00:30:21.764167 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764177 | orchestrator | ok: [testbed-node-4] =>  2026-01-05 00:30:21.764188 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764209 | orchestrator | ok: [testbed-node-5] =>  2026-01-05 00:30:21.764220 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764255 | orchestrator | ok: [testbed-node-0] =>  2026-01-05 00:30:21.764266 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764277 | orchestrator | ok: [testbed-node-1] =>  2026-01-05 00:30:21.764385 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764399 | orchestrator | ok: [testbed-node-2] =>  2026-01-05 00:30:21.764409 | orchestrator |  docker_version: 5:27.5.1 2026-01-05 00:30:21.764419 | orchestrator | 2026-01-05 00:30:21.764428 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-05 00:30:21.764437 | orchestrator | Monday 05 January 2026 00:30:15 +0000 (0:00:00.321) 0:05:08.680 ******** 2026-01-05 00:30:21.764450 | orchestrator | ok: [testbed-manager] =>  2026-01-05 
00:30:21.764459 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764468 | orchestrator | ok: [testbed-node-3] =>  2026-01-05 00:30:21.764478 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764488 | orchestrator | ok: [testbed-node-4] =>  2026-01-05 00:30:21.764499 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764509 | orchestrator | ok: [testbed-node-5] =>  2026-01-05 00:30:21.764520 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764531 | orchestrator | ok: [testbed-node-0] =>  2026-01-05 00:30:21.764541 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764551 | orchestrator | ok: [testbed-node-1] =>  2026-01-05 00:30:21.764561 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764570 | orchestrator | ok: [testbed-node-2] =>  2026-01-05 00:30:21.764580 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-05 00:30:21.764590 | orchestrator | 2026-01-05 00:30:21.764600 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-05 00:30:21.764610 | orchestrator | Monday 05 January 2026 00:30:16 +0000 (0:00:00.364) 0:05:09.045 ******** 2026-01-05 00:30:21.764621 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.764631 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.764640 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.764650 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.764661 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:21.764670 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:21.764680 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:21.764691 | orchestrator | 2026-01-05 00:30:21.764701 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-05 00:30:21.764711 | orchestrator | Monday 05 January 2026 00:30:16 +0000 (0:00:00.307) 0:05:09.352 ******** 
2026-01-05 00:30:21.764721 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.764731 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.764741 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.764750 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.764760 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:21.764770 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:21.764780 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:21.764790 | orchestrator | 2026-01-05 00:30:21.764801 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-05 00:30:21.764812 | orchestrator | Monday 05 January 2026 00:30:16 +0000 (0:00:00.345) 0:05:09.698 ******** 2026-01-05 00:30:21.764824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:30:21.764866 | orchestrator | 2026-01-05 00:30:21.764877 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-05 00:30:21.764888 | orchestrator | Monday 05 January 2026 00:30:17 +0000 (0:00:00.443) 0:05:10.142 ******** 2026-01-05 00:30:21.764899 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:21.764911 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:21.764934 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:21.764944 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:21.764955 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:21.764965 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:21.764976 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:21.764986 | orchestrator | 2026-01-05 00:30:21.764996 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-05 
00:30:21.765016 | orchestrator | Monday 05 January 2026 00:30:18 +0000 (0:00:00.969) 0:05:11.111 ******** 2026-01-05 00:30:21.765026 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:21.765036 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:21.765046 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:21.765057 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:21.765067 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:21.765078 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:21.765089 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:21.765099 | orchestrator | 2026-01-05 00:30:21.765109 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-05 00:30:21.765121 | orchestrator | Monday 05 January 2026 00:30:21 +0000 (0:00:03.122) 0:05:14.234 ******** 2026-01-05 00:30:21.765132 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-05 00:30:21.765143 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-05 00:30:21.765153 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-05 00:30:21.765163 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:21.765173 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-05 00:30:21.765184 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-05 00:30:21.765194 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-05 00:30:21.765204 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:21.765213 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-05 00:30:21.765223 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-05 00:30:21.765233 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-05 00:30:21.765244 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:21.765254 | orchestrator | skipping: [testbed-node-5] 
=> (item=containerd)  2026-01-05 00:30:21.765264 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-05 00:30:21.765274 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-01-05 00:30:21.765285 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:21.765309 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-05 00:31:22.303425 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-05 00:31:22.303555 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-05 00:31:22.303572 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:22.303584 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-05 00:31:22.303596 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-05 00:31:22.303607 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-05 00:31:22.303618 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:22.303629 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-05 00:31:22.303640 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-05 00:31:22.303651 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-05 00:31:22.303662 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:22.303673 | orchestrator | 2026-01-05 00:31:22.303685 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-05 00:31:22.303697 | orchestrator | Monday 05 January 2026 00:30:21 +0000 (0:00:00.663) 0:05:14.897 ******** 2026-01-05 00:31:22.303708 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.303724 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.303744 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.303762 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.303813 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.303833 | 
orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.303852 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.303897 | orchestrator | 2026-01-05 00:31:22.303918 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-05 00:31:22.303939 | orchestrator | Monday 05 January 2026 00:30:28 +0000 (0:00:06.524) 0:05:21.422 ******** 2026-01-05 00:31:22.303960 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.303981 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304003 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304023 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.304043 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304062 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304077 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.304090 | orchestrator | 2026-01-05 00:31:22.304103 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-05 00:31:22.304115 | orchestrator | Monday 05 January 2026 00:30:29 +0000 (0:00:01.089) 0:05:22.511 ******** 2026-01-05 00:31:22.304128 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.304139 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.304150 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304161 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.304171 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304182 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304192 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304203 | orchestrator | 2026-01-05 00:31:22.304214 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-05 00:31:22.304225 | orchestrator | Monday 05 January 2026 00:30:37 +0000 (0:00:08.390) 0:05:30.901 ******** 2026-01-05 00:31:22.304236 | orchestrator | changed: 
[testbed-node-3] 2026-01-05 00:31:22.304246 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304257 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:22.304268 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304278 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304289 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.304300 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304310 | orchestrator | 2026-01-05 00:31:22.304321 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-05 00:31:22.304332 | orchestrator | Monday 05 January 2026 00:30:41 +0000 (0:00:03.323) 0:05:34.225 ******** 2026-01-05 00:31:22.304343 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.304354 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.304365 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304383 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304401 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304419 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304436 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.304452 | orchestrator | 2026-01-05 00:31:22.304471 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-05 00:31:22.304516 | orchestrator | Monday 05 January 2026 00:30:42 +0000 (0:00:01.375) 0:05:35.601 ******** 2026-01-05 00:31:22.304537 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.304549 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.304560 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304571 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304582 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304593 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304604 | orchestrator | changed: [testbed-node-2] 2026-01-05 
00:31:22.304614 | orchestrator | 2026-01-05 00:31:22.304625 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-01-05 00:31:22.304636 | orchestrator | Monday 05 January 2026 00:30:44 +0000 (0:00:01.608) 0:05:37.209 ******** 2026-01-05 00:31:22.304647 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:22.304671 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:22.304682 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:22.304693 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:22.304703 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:22.304714 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:22.304724 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:22.304735 | orchestrator | 2026-01-05 00:31:22.304746 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-05 00:31:22.304757 | orchestrator | Monday 05 January 2026 00:30:44 +0000 (0:00:00.631) 0:05:37.841 ******** 2026-01-05 00:31:22.304767 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:22.304778 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.304789 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:22.304799 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:22.304810 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:22.304821 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:22.304831 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:22.304842 | orchestrator | 2026-01-05 00:31:22.304853 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-05 00:31:22.304968 | orchestrator | Monday 05 January 2026 00:30:54 +0000 (0:00:09.875) 0:05:47.717 ******** 2026-01-05 00:31:22.304991 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:22.305010 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:22.305029 | 
orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:22.305049 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:22.305065 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:22.305083 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:22.305102 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:22.305119 | orchestrator |
2026-01-05 00:31:22.305137 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-05 00:31:22.305156 | orchestrator | Monday 05 January 2026 00:30:55 +0000 (0:00:00.941) 0:05:48.659 ********
2026-01-05 00:31:22.305175 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:22.305194 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:22.305214 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:22.305232 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:22.305250 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:22.305269 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:22.305288 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:22.305307 | orchestrator |
2026-01-05 00:31:22.305326 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-05 00:31:22.305345 | orchestrator | Monday 05 January 2026 00:31:04 +0000 (0:00:08.815) 0:05:57.475 ********
2026-01-05 00:31:22.305364 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:22.305411 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:22.305430 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:22.305446 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:22.305461 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:22.305478 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:22.305495 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:22.305512 | orchestrator |
2026-01-05 00:31:22.305529 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-05 00:31:22.305545 | orchestrator | Monday 05 January 2026 00:31:15 +0000 (0:00:10.952) 0:06:08.427 ********
2026-01-05 00:31:22.305563 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-05 00:31:22.305582 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:31:22.305601 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:31:22.305616 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:31:22.305631 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-05 00:31:22.305648 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:31:22.305666 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:31:22.305701 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-05 00:31:22.305718 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:31:22.305734 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-05 00:31:22.305751 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-05 00:31:22.305769 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-05 00:31:22.305861 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-05 00:31:22.305914 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-05 00:31:22.305932 | orchestrator |
2026-01-05 00:31:22.305973 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-05 00:31:22.305992 | orchestrator | Monday 05 January 2026 00:31:16 +0000 (0:00:01.248) 0:06:09.676 ********
2026-01-05 00:31:22.306009 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:22.306130 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:22.306148 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:22.306166 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:22.306183 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:22.306200 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:22.306218 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:22.306234 | orchestrator |
2026-01-05 00:31:22.306252 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-05 00:31:22.306269 | orchestrator | Monday 05 January 2026 00:31:17 +0000 (0:00:00.594) 0:06:10.270 ********
2026-01-05 00:31:22.306286 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:22.306314 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:22.306332 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:22.306349 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:22.306368 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:22.306386 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:22.306404 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:22.306422 | orchestrator |
2026-01-05 00:31:22.306441 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-05 00:31:22.306461 | orchestrator | Monday 05 January 2026 00:31:21 +0000 (0:00:03.944) 0:06:14.214 ********
2026-01-05 00:31:22.306479 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:22.306498 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:22.306517 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:22.306535 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:22.306556 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:22.306575 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:22.306594 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:22.306615 | orchestrator |
2026-01-05 00:31:22.306636 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-05 00:31:22.306655 | orchestrator | Monday 05 January 2026 00:31:21 +0000 (0:00:00.519) 0:06:14.733 ********
2026-01-05 00:31:22.306675 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-05 00:31:22.306695 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-05 00:31:22.306714 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:22.306735 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:31:22.306778 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-05 00:31:22.306796 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:22.306812 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:31:22.306829 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-05 00:31:22.306897 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:22.306944 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:31:41.340752 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-05 00:31:41.340846 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:41.340855 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:31:41.340940 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-05 00:31:41.340946 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:41.340950 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:31:41.340954 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-05 00:31:41.340958 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:41.340962 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:31:41.340966 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-05 00:31:41.340970 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:41.340974 | orchestrator |
2026-01-05 00:31:41.340979 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-05 00:31:41.340984 | orchestrator | Monday 05 January 2026 00:31:22 +0000 (0:00:00.704) 0:06:15.438 ********
2026-01-05 00:31:41.340988 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:41.340992 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:41.340996 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:41.341000 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:41.341004 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:41.341007 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:41.341011 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:41.341015 | orchestrator |
2026-01-05 00:31:41.341019 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-05 00:31:41.341023 | orchestrator | Monday 05 January 2026 00:31:22 +0000 (0:00:00.444) 0:06:15.882 ********
2026-01-05 00:31:41.341027 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:41.341031 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:41.341034 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:41.341038 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:41.341042 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:41.341045 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:41.341049 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:41.341053 | orchestrator |
2026-01-05 00:31:41.341057 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-05 00:31:41.341060 | orchestrator | Monday 05 January 2026 00:31:23 +0000 (0:00:00.497) 0:06:16.380 ********
2026-01-05 00:31:41.341064 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:41.341068 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:31:41.341071 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:31:41.341075 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:31:41.341079 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:31:41.341082 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:31:41.341086 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:31:41.341090 | orchestrator |
2026-01-05 00:31:41.341094 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-05 00:31:41.341097 | orchestrator | Monday 05 January 2026 00:31:23 +0000 (0:00:00.490) 0:06:16.870 ********
2026-01-05 00:31:41.341101 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341105 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341111 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341118 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341124 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341129 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341139 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341146 | orchestrator |
2026-01-05 00:31:41.341154 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-05 00:31:41.341160 | orchestrator | Monday 05 January 2026 00:31:25 +0000 (0:00:01.710) 0:06:18.581 ********
2026-01-05 00:31:41.341169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:31:41.341188 | orchestrator |
2026-01-05 00:31:41.341195 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-05 00:31:41.341202 | orchestrator | Monday 05 January 2026 00:31:26 +0000 (0:00:00.921) 0:06:19.502 ********
2026-01-05 00:31:41.341208 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341215 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:41.341221 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:41.341228 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:41.341234 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:41.341240 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:41.341247 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:41.341253 | orchestrator |
2026-01-05 00:31:41.341260 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-05 00:31:41.341268 | orchestrator | Monday 05 January 2026 00:31:27 +0000 (0:00:00.837) 0:06:20.340 ********
2026-01-05 00:31:41.341274 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341281 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:41.341288 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:41.341295 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:41.341302 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:41.341308 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:41.341315 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:41.341322 | orchestrator |
2026-01-05 00:31:41.341330 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-05 00:31:41.341338 | orchestrator | Monday 05 January 2026 00:31:28 +0000 (0:00:00.867) 0:06:21.208 ********
2026-01-05 00:31:41.341345 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341353 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:41.341362 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:41.341372 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:41.341378 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:41.341384 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:41.341390 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:41.341396 | orchestrator |
2026-01-05 00:31:41.341404 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-05 00:31:41.341428 | orchestrator | Monday 05 January 2026 00:31:29 +0000 (0:00:01.595) 0:06:22.803 ********
2026-01-05 00:31:41.341434 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:31:41.341438 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341443 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341447 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341452 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341456 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341461 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341465 | orchestrator |
2026-01-05 00:31:41.341470 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-05 00:31:41.341474 | orchestrator | Monday 05 January 2026 00:31:31 +0000 (0:00:01.310) 0:06:24.114 ********
2026-01-05 00:31:41.341478 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341483 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:41.341487 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:41.341491 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:41.341496 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:41.341500 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:41.341505 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:41.341509 | orchestrator |
2026-01-05 00:31:41.341514 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-05 00:31:41.341518 | orchestrator | Monday 05 January 2026 00:31:32 +0000 (0:00:01.265) 0:06:25.379 ********
2026-01-05 00:31:41.341522 | orchestrator | changed: [testbed-manager]
2026-01-05 00:31:41.341527 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:31:41.341531 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:31:41.341535 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:31:41.341540 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:31:41.341550 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:31:41.341554 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:31:41.341559 | orchestrator |
2026-01-05 00:31:41.341563 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-05 00:31:41.341568 | orchestrator | Monday 05 January 2026 00:31:33 +0000 (0:00:01.423) 0:06:26.803 ********
2026-01-05 00:31:41.341572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:31:41.341577 | orchestrator |
2026-01-05 00:31:41.341581 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-05 00:31:41.341586 | orchestrator | Monday 05 January 2026 00:31:34 +0000 (0:00:01.070) 0:06:27.873 ********
2026-01-05 00:31:41.341590 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341595 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341599 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341603 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341607 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341612 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341616 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341620 | orchestrator |
2026-01-05 00:31:41.341625 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-05 00:31:41.341629 | orchestrator | Monday 05 January 2026 00:31:36 +0000 (0:00:01.369) 0:06:29.242 ********
2026-01-05 00:31:41.341634 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341638 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341643 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341647 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341651 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341655 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341660 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341664 | orchestrator |
2026-01-05 00:31:41.341668 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-05 00:31:41.341672 | orchestrator | Monday 05 January 2026 00:31:37 +0000 (0:00:01.133) 0:06:30.376 ********
2026-01-05 00:31:41.341676 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341680 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341683 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341687 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341691 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341694 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341698 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341702 | orchestrator |
2026-01-05 00:31:41.341720 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-05 00:31:41.341724 | orchestrator | Monday 05 January 2026 00:31:38 +0000 (0:00:01.139) 0:06:31.516 ********
2026-01-05 00:31:41.341728 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:31:41.341732 | orchestrator | ok: [testbed-manager]
2026-01-05 00:31:41.341735 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:31:41.341739 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:31:41.341743 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:31:41.341747 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:31:41.341750 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:31:41.341754 | orchestrator |
2026-01-05 00:31:41.341758 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-05 00:31:41.341762 | orchestrator | Monday 05 January 2026 00:31:40 +0000 (0:00:01.407) 0:06:32.923 ********
2026-01-05 00:31:41.341765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:31:41.341769 | orchestrator |
2026-01-05 00:31:41.341773 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:31:41.341777 | orchestrator | Monday 05 January 2026 00:31:40 +0000 (0:00:00.929) 0:06:33.853 ********
2026-01-05 00:31:41.341784 | orchestrator |
2026-01-05 00:31:41.341788 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:31:41.341792 | orchestrator | Monday 05 January 2026 00:31:40 +0000 (0:00:00.040) 0:06:33.894 ********
2026-01-05 00:31:41.341795 | orchestrator |
2026-01-05 00:31:41.341799 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:31:41.341803 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.047) 0:06:33.941 ********
2026-01-05 00:31:41.341807 | orchestrator |
2026-01-05 00:31:41.341810 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:31:41.341817 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.039) 0:06:33.981 ********
2026-01-05 00:32:06.704428 | orchestrator |
2026-01-05 00:32:06.704593 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:06.704623 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.039) 0:06:34.020 ********
2026-01-05 00:32:06.704640 | orchestrator |
2026-01-05 00:32:06.704659 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:06.704677 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.073) 0:06:34.094 ********
2026-01-05 00:32:06.704695 | orchestrator |
2026-01-05 00:32:06.704714 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-05 00:32:06.704733 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.056) 0:06:34.150 ********
2026-01-05 00:32:06.704749 | orchestrator |
2026-01-05 00:32:06.704765 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:32:06.704782 | orchestrator | Monday 05 January 2026 00:31:41 +0000 (0:00:00.072) 0:06:34.222 ********
2026-01-05 00:32:06.704801 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:06.704820 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:06.704840 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:06.704858 | orchestrator |
2026-01-05 00:32:06.704878 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-05 00:32:06.705034 | orchestrator | Monday 05 January 2026 00:31:42 +0000 (0:00:01.167) 0:06:35.390 ********
2026-01-05 00:32:06.705054 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:06.705073 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:06.705090 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:06.705107 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:06.705126 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:06.705145 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:06.705164 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:06.705200 | orchestrator |
2026-01-05 00:32:06.705228 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-05 00:32:06.705255 | orchestrator | Monday 05 January 2026 00:31:43 +0000 (0:00:01.512) 0:06:36.902 ********
2026-01-05 00:32:06.705276 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:06.705295 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:06.705313 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:06.705328 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:06.705346 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:06.705363 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:06.705382 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:06.705401 | orchestrator |
2026-01-05 00:32:06.705419 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-05 00:32:06.705438 | orchestrator | Monday 05 January 2026 00:31:45 +0000 (0:00:01.170) 0:06:38.073 ********
2026-01-05 00:32:06.705457 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:06.705476 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:06.705496 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:06.705515 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:06.705533 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:06.705552 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:06.705571 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:06.705626 | orchestrator |
2026-01-05 00:32:06.705644 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-05 00:32:06.705662 | orchestrator | Monday 05 January 2026 00:31:47 +0000 (0:00:02.356) 0:06:40.430 ********
2026-01-05 00:32:06.705679 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:06.705696 | orchestrator |
2026-01-05 00:32:06.705713 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-05 00:32:06.705730 | orchestrator | Monday 05 January 2026 00:31:47 +0000 (0:00:00.103) 0:06:40.534 ********
2026-01-05 00:32:06.705748 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.705766 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:06.705783 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:06.705802 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:06.705820 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:06.705839 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:06.705855 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:06.705874 | orchestrator |
2026-01-05 00:32:06.705945 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-05 00:32:06.705967 | orchestrator | Monday 05 January 2026 00:31:48 +0000 (0:00:01.015) 0:06:41.550 ********
2026-01-05 00:32:06.705987 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:06.706004 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:06.706152 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:06.706175 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:06.706192 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:06.706211 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:06.706229 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:06.706247 | orchestrator |
2026-01-05 00:32:06.706264 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-05 00:32:06.706283 | orchestrator | Monday 05 January 2026 00:31:49 +0000 (0:00:00.574) 0:06:42.124 ********
2026-01-05 00:32:06.706304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:06.706328 | orchestrator |
2026-01-05 00:32:06.706349 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-05 00:32:06.706369 | orchestrator | Monday 05 January 2026 00:31:50 +0000 (0:00:01.127) 0:06:43.252 ********
2026-01-05 00:32:06.706388 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.706409 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:06.706430 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:06.706451 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:06.706471 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:06.706488 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:06.706506 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:06.706524 | orchestrator |
2026-01-05 00:32:06.706541 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-05 00:32:06.706559 | orchestrator | Monday 05 January 2026 00:31:51 +0000 (0:00:00.858) 0:06:44.111 ********
2026-01-05 00:32:06.706577 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-05 00:32:06.706630 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-05 00:32:06.706647 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-05 00:32:06.706663 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-05 00:32:06.706679 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-05 00:32:06.706695 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-05 00:32:06.706711 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-05 00:32:06.706727 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-05 00:32:06.706743 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-05 00:32:06.706759 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-05 00:32:06.706796 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-05 00:32:06.706812 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-05 00:32:06.706828 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-05 00:32:06.706845 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-05 00:32:06.706862 | orchestrator |
2026-01-05 00:32:06.706878 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-05 00:32:06.706916 | orchestrator | Monday 05 January 2026 00:31:53 +0000 (0:00:02.467) 0:06:46.579 ********
2026-01-05 00:32:06.706932 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:06.706949 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:06.706965 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:06.706981 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:06.706997 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:06.707014 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:06.707030 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:06.707047 | orchestrator |
2026-01-05 00:32:06.707063 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-05 00:32:06.707079 | orchestrator | Monday 05 January 2026 00:31:54 +0000 (0:00:00.804) 0:06:47.383 ********
2026-01-05 00:32:06.707098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:06.707115 | orchestrator |
2026-01-05 00:32:06.707131 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-05 00:32:06.707146 | orchestrator | Monday 05 January 2026 00:31:55 +0000 (0:00:00.838) 0:06:48.222 ********
2026-01-05 00:32:06.707161 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.707177 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:06.707193 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:06.707208 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:06.707224 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:06.707240 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:06.707256 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:06.707272 | orchestrator |
2026-01-05 00:32:06.707288 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-05 00:32:06.707303 | orchestrator | Monday 05 January 2026 00:31:56 +0000 (0:00:01.035) 0:06:49.054 ********
2026-01-05 00:32:06.707319 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.707335 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:06.707350 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:06.707365 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:06.707379 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:06.707394 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:06.707408 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:06.707424 | orchestrator |
2026-01-05 00:32:06.707441 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-05 00:32:06.707457 | orchestrator | Monday 05 January 2026 00:31:57 +0000 (0:00:01.035) 0:06:50.089 ********
2026-01-05 00:32:06.707473 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:06.707502 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:06.707517 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:06.707533 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:06.707549 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:06.707564 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:06.707580 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:06.707596 | orchestrator |
2026-01-05 00:32:06.707612 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-05 00:32:06.707628 | orchestrator | Monday 05 January 2026 00:31:57 +0000 (0:00:00.529) 0:06:50.619 ********
2026-01-05 00:32:06.707643 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.707659 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:06.707688 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:06.707705 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:06.707721 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:06.707736 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:06.707752 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:06.707768 | orchestrator |
2026-01-05 00:32:06.707784 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-05 00:32:06.707800 | orchestrator | Monday 05 January 2026 00:31:59 +0000 (0:00:01.396) 0:06:52.015 ********
2026-01-05 00:32:06.707815 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:06.707830 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:06.707846 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:06.707861 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:06.707877 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:06.707916 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:06.707933 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:06.707949 | orchestrator |
2026-01-05 00:32:06.707964 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-05 00:32:06.707980 | orchestrator | Monday 05 January 2026 00:31:59 +0000 (0:00:00.544) 0:06:52.559 ********
2026-01-05 00:32:06.707995 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:06.708010 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:06.708026 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:06.708042 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:06.708058 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:06.708073 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:06.708106 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:39.604230 | orchestrator |
2026-01-05 00:32:39.604370 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-05 00:32:39.604388 | orchestrator | Monday 05 January 2026 00:32:06 +0000 (0:00:07.034) 0:06:59.594 ********
2026-01-05 00:32:39.604401 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.604413 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:39.604425 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:39.604435 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:39.604446 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:39.604457 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:39.604467 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:39.604478 | orchestrator |
2026-01-05 00:32:39.604490 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-05 00:32:39.604500 | orchestrator | Monday 05 January 2026 00:32:08 +0000 (0:00:01.507) 0:07:01.101 ********
2026-01-05 00:32:39.604511 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.604522 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:39.604532 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:39.604543 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:39.604554 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:39.604564 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:39.604576 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:39.604586 | orchestrator |
2026-01-05 00:32:39.604597 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-05 00:32:39.604608 | orchestrator | Monday 05 January 2026 00:32:09 +0000 (0:00:01.665) 0:07:02.767 ********
2026-01-05 00:32:39.604619 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.604630 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:39.604640 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:39.604651 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:39.604662 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:39.604672 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:39.604683 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:39.604694 | orchestrator |
2026-01-05 00:32:39.604705 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 00:32:39.604716 | orchestrator | Monday 05 January 2026 00:32:11 +0000 (0:00:01.639) 0:07:04.406 ********
2026-01-05 00:32:39.604752 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.604765 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:39.604777 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:39.604790 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:39.604803 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:39.604816 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:39.604828 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:39.604841 | orchestrator |
2026-01-05 00:32:39.604854 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 00:32:39.604867 | orchestrator | Monday 05 January 2026 00:32:12 +0000 (0:00:00.831) 0:07:05.238 ********
2026-01-05 00:32:39.604880 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:39.604921 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:39.604935 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:39.604947 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:39.604960 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:39.604973 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:39.604985 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:39.605001 | orchestrator |
2026-01-05 00:32:39.605019 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-05 00:32:39.605039 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:01.020) 0:07:06.258 ********
2026-01-05 00:32:39.605059 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:39.605077 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:39.605095 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:39.605107 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:39.605117 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:39.605128 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:39.605138 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:39.605149 | orchestrator |
2026-01-05 00:32:39.605159 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-05 00:32:39.605171 | orchestrator | Monday 05 January 2026 00:32:13 +0000 (0:00:00.524) 0:07:06.783 ********
2026-01-05 00:32:39.605184 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.605202 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:39.605220 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:39.605237 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:39.605281 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:39.605300 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:39.605318 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:39.605335 | orchestrator |
2026-01-05 00:32:39.605353 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-05 00:32:39.605372 | orchestrator | Monday 05 January 2026 00:32:14 +0000 (0:00:00.600) 0:07:07.383 ********
2026-01-05 00:32:39.605391 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:39.605409 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:39.605428 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:39.605446 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:39.605464 | orchestrator | ok:
[testbed-node-0] 2026-01-05 00:32:39.605483 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.605502 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.605513 | orchestrator | 2026-01-05 00:32:39.605524 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-01-05 00:32:39.605535 | orchestrator | Monday 05 January 2026 00:32:15 +0000 (0:00:00.566) 0:07:07.950 ******** 2026-01-05 00:32:39.605546 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:39.605557 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:39.605567 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:39.605578 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:39.605588 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:39.605599 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.605609 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.605619 | orchestrator | 2026-01-05 00:32:39.605630 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-01-05 00:32:39.605641 | orchestrator | Monday 05 January 2026 00:32:15 +0000 (0:00:00.775) 0:07:08.726 ******** 2026-01-05 00:32:39.605664 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:39.605675 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:39.605685 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.605696 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:39.605706 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.605717 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:39.605727 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:39.605738 | orchestrator | 2026-01-05 00:32:39.605769 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-01-05 00:32:39.605781 | orchestrator | Monday 05 January 2026 00:32:21 +0000 (0:00:05.518) 0:07:14.244 ******** 2026-01-05 00:32:39.605792 | orchestrator | skipping: [testbed-manager] 2026-01-05 
00:32:39.605803 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:39.605813 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:32:39.605824 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:32:39.605834 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:32:39.605845 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:32:39.605856 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:32:39.605866 | orchestrator | 2026-01-05 00:32:39.605877 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-01-05 00:32:39.605887 | orchestrator | Monday 05 January 2026 00:32:21 +0000 (0:00:00.547) 0:07:14.792 ******** 2026-01-05 00:32:39.605920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:32:39.605933 | orchestrator | 2026-01-05 00:32:39.605944 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-01-05 00:32:39.605955 | orchestrator | Monday 05 January 2026 00:32:22 +0000 (0:00:01.031) 0:07:15.824 ******** 2026-01-05 00:32:39.605966 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:39.605976 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:39.605993 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:39.606013 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:39.606107 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:39.606129 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.606148 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.606167 | orchestrator | 2026-01-05 00:32:39.606187 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-01-05 00:32:39.606209 | orchestrator | Monday 05 January 2026 00:32:24 +0000 (0:00:01.886) 
0:07:17.711 ******** 2026-01-05 00:32:39.606222 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:39.606232 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:39.606243 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.606254 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.606265 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:39.606275 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:39.606286 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:39.606296 | orchestrator | 2026-01-05 00:32:39.606307 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-01-05 00:32:39.606318 | orchestrator | Monday 05 January 2026 00:32:26 +0000 (0:00:01.967) 0:07:19.679 ******** 2026-01-05 00:32:39.606329 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:39.606339 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:39.606350 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:39.606360 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:39.606371 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:39.606382 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:39.606392 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:39.606403 | orchestrator | 2026-01-05 00:32:39.606413 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-01-05 00:32:39.606424 | orchestrator | Monday 05 January 2026 00:32:27 +0000 (0:00:00.840) 0:07:20.519 ******** 2026-01-05 00:32:39.606435 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606458 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606469 | orchestrator | changed: [testbed-node-4] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606487 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606499 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606509 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606520 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-05 00:32:39.606531 | orchestrator | 2026-01-05 00:32:39.606541 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-05 00:32:39.606552 | orchestrator | Monday 05 January 2026 00:32:29 +0000 (0:00:01.922) 0:07:22.442 ******** 2026-01-05 00:32:39.606564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:32:39.606575 | orchestrator | 2026-01-05 00:32:39.606586 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-05 00:32:39.606597 | orchestrator | Monday 05 January 2026 00:32:30 +0000 (0:00:00.899) 0:07:23.341 ******** 2026-01-05 00:32:39.606608 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:39.606619 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:39.606629 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:39.606640 | orchestrator | changed: [testbed-node-2] 2026-01-05 
00:32:39.606651 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:39.606662 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:39.606673 | orchestrator | changed: [testbed-manager] 2026-01-05 00:32:39.606683 | orchestrator | 2026-01-05 00:32:39.606705 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-05 00:33:10.217221 | orchestrator | Monday 05 January 2026 00:32:39 +0000 (0:00:09.160) 0:07:32.501 ******** 2026-01-05 00:33:10.217358 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:10.217376 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:10.217388 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:10.217399 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:10.217410 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:10.217421 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:10.217432 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:10.217442 | orchestrator | 2026-01-05 00:33:10.217455 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-05 00:33:10.217466 | orchestrator | Monday 05 January 2026 00:32:41 +0000 (0:00:01.985) 0:07:34.487 ******** 2026-01-05 00:33:10.217477 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:10.217488 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:10.217498 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:10.217509 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:10.217520 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:10.217531 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:10.217541 | orchestrator | 2026-01-05 00:33:10.217552 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-05 00:33:10.217563 | orchestrator | Monday 05 January 2026 00:32:42 +0000 (0:00:01.288) 0:07:35.775 ******** 2026-01-05 00:33:10.217574 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:33:10.217587 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.217659 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.217678 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.217697 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.217716 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.217736 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.217756 | orchestrator | 2026-01-05 00:33:10.217775 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-05 00:33:10.217788 | orchestrator | 2026-01-05 00:33:10.217801 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-05 00:33:10.217813 | orchestrator | Monday 05 January 2026 00:32:44 +0000 (0:00:01.254) 0:07:37.029 ******** 2026-01-05 00:33:10.217825 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:33:10.217838 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:33:10.217851 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:33:10.217863 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:33:10.217875 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:33:10.217887 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:33:10.217929 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:33:10.217941 | orchestrator | 2026-01-05 00:33:10.217953 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-05 00:33:10.217964 | orchestrator | 2026-01-05 00:33:10.217975 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-05 00:33:10.217989 | orchestrator | Monday 05 January 2026 00:32:44 +0000 (0:00:00.757) 0:07:37.786 ******** 2026-01-05 00:33:10.218008 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.218100 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.218112 | orchestrator 
| changed: [testbed-node-4] 2026-01-05 00:33:10.218123 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.218134 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.218145 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.218155 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.218166 | orchestrator | 2026-01-05 00:33:10.218177 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-05 00:33:10.218188 | orchestrator | Monday 05 January 2026 00:32:46 +0000 (0:00:01.347) 0:07:39.134 ******** 2026-01-05 00:33:10.218199 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:10.218210 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:10.218221 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:10.218232 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:10.218243 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:10.218254 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:10.218264 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:10.218275 | orchestrator | 2026-01-05 00:33:10.218286 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-05 00:33:10.218297 | orchestrator | Monday 05 January 2026 00:32:47 +0000 (0:00:01.449) 0:07:40.583 ******** 2026-01-05 00:33:10.218325 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:33:10.218337 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:33:10.218348 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:33:10.218358 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:33:10.218369 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:33:10.218379 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:33:10.218390 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:33:10.218402 | orchestrator | 2026-01-05 00:33:10.218413 | orchestrator | TASK [Include smartd role] 
***************************************************** 2026-01-05 00:33:10.218424 | orchestrator | Monday 05 January 2026 00:32:48 +0000 (0:00:00.557) 0:07:41.140 ******** 2026-01-05 00:33:10.218436 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:10.218448 | orchestrator | 2026-01-05 00:33:10.218459 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-05 00:33:10.218470 | orchestrator | Monday 05 January 2026 00:32:49 +0000 (0:00:01.032) 0:07:42.173 ******** 2026-01-05 00:33:10.218494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:10.218508 | orchestrator | 2026-01-05 00:33:10.218518 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-05 00:33:10.218529 | orchestrator | Monday 05 January 2026 00:32:50 +0000 (0:00:00.853) 0:07:43.026 ******** 2026-01-05 00:33:10.218540 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.218551 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.218562 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.218572 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.218583 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.218594 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.218604 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.218615 | orchestrator | 2026-01-05 00:33:10.218647 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-05 00:33:10.218659 | orchestrator | Monday 05 January 2026 00:32:58 +0000 (0:00:08.503) 0:07:51.530 ******** 
2026-01-05 00:33:10.218670 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.218681 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.218692 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.218702 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.218713 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.218724 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.218734 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.218745 | orchestrator | 2026-01-05 00:33:10.218756 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-05 00:33:10.218767 | orchestrator | Monday 05 January 2026 00:32:59 +0000 (0:00:00.841) 0:07:52.371 ******** 2026-01-05 00:33:10.218778 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.218788 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.218799 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.218810 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.218820 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.218831 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.218841 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.218852 | orchestrator | 2026-01-05 00:33:10.218863 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-05 00:33:10.218874 | orchestrator | Monday 05 January 2026 00:33:00 +0000 (0:00:01.351) 0:07:53.723 ******** 2026-01-05 00:33:10.218885 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.218962 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.218973 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.218984 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.218995 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.219006 | orchestrator | changed: [testbed-node-1] 2026-01-05 
00:33:10.219017 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.219028 | orchestrator | 2026-01-05 00:33:10.219039 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-01-05 00:33:10.219050 | orchestrator | Monday 05 January 2026 00:33:02 +0000 (0:00:01.953) 0:07:55.677 ******** 2026-01-05 00:33:10.219061 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.219072 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.219082 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.219093 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.219104 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.219115 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.219126 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.219136 | orchestrator | 2026-01-05 00:33:10.219147 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-01-05 00:33:10.219158 | orchestrator | Monday 05 January 2026 00:33:03 +0000 (0:00:01.233) 0:07:56.910 ******** 2026-01-05 00:33:10.219178 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.219189 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.219200 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.219210 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.219221 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.219232 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.219243 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.219254 | orchestrator | 2026-01-05 00:33:10.219265 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-01-05 00:33:10.219276 | orchestrator | 2026-01-05 00:33:10.219286 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-01-05 00:33:10.219297 | orchestrator | 
Monday 05 January 2026 00:33:05 +0000 (0:00:01.103) 0:07:58.014 ******** 2026-01-05 00:33:10.219309 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:10.219320 | orchestrator | 2026-01-05 00:33:10.219331 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-05 00:33:10.219348 | orchestrator | Monday 05 January 2026 00:33:05 +0000 (0:00:00.869) 0:07:58.883 ******** 2026-01-05 00:33:10.219359 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:10.219370 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:10.219381 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:10.219391 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:10.219402 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:10.219413 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:10.219424 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:10.219434 | orchestrator | 2026-01-05 00:33:10.219445 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-05 00:33:10.219456 | orchestrator | Monday 05 January 2026 00:33:07 +0000 (0:00:01.110) 0:07:59.994 ******** 2026-01-05 00:33:10.219467 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:10.219478 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:10.219489 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:10.219500 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:10.219511 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:10.219521 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:10.219532 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:10.219543 | orchestrator | 2026-01-05 00:33:10.219554 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-01-05 00:33:10.219565 | orchestrator | 
Monday 05 January 2026 00:33:08 +0000 (0:00:01.193) 0:08:01.187 ******** 2026-01-05 00:33:10.219576 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:33:10.219587 | orchestrator | 2026-01-05 00:33:10.219598 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-05 00:33:10.219609 | orchestrator | Monday 05 January 2026 00:33:09 +0000 (0:00:01.074) 0:08:02.261 ******** 2026-01-05 00:33:10.219620 | orchestrator | ok: [testbed-manager] 2026-01-05 00:33:10.219630 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:33:10.219641 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:33:10.219652 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:33:10.219663 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:33:10.219681 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:33:10.219701 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:33:10.219719 | orchestrator | 2026-01-05 00:33:10.219752 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-05 00:33:11.852387 | orchestrator | Monday 05 January 2026 00:33:10 +0000 (0:00:00.846) 0:08:03.108 ******** 2026-01-05 00:33:11.852500 | orchestrator | changed: [testbed-manager] 2026-01-05 00:33:11.852516 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:33:11.852524 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:33:11.852530 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:33:11.852564 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:33:11.852572 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:33:11.852578 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:33:11.852585 | orchestrator | 2026-01-05 00:33:11.852593 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:33:11.852602 | orchestrator | 
testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-05 00:33:11.852611 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-05 00:33:11.852618 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-05 00:33:11.852625 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-05 00:33:11.852632 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-01-05 00:33:11.852639 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-05 00:33:11.852646 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-05 00:33:11.852653 | orchestrator | 2026-01-05 00:33:11.852660 | orchestrator | 2026-01-05 00:33:11.852667 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:33:11.852675 | orchestrator | Monday 05 January 2026 00:33:11 +0000 (0:00:01.097) 0:08:04.205 ******** 2026-01-05 00:33:11.852682 | orchestrator | =============================================================================== 2026-01-05 00:33:11.852689 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.67s 2026-01-05 00:33:11.852696 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.39s 2026-01-05 00:33:11.852703 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.44s 2026-01-05 00:33:11.852710 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.40s 2026-01-05 00:33:11.852717 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.22s 2026-01-05 
00:33:11.852725 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.04s 2026-01-05 00:33:11.852733 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.95s 2026-01-05 00:33:11.852740 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.88s 2026-01-05 00:33:11.852747 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.16s 2026-01-05 00:33:11.852754 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.82s 2026-01-05 00:33:11.852778 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.52s 2026-01-05 00:33:11.852786 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s 2026-01-05 00:33:11.852793 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.39s 2026-01-05 00:33:11.852800 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.97s 2026-01-05 00:33:11.852807 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.50s 2026-01-05 00:33:11.852814 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.03s 2026-01-05 00:33:11.852821 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.52s 2026-01-05 00:33:11.852828 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.93s 2026-01-05 00:33:11.852835 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.73s 2026-01-05 00:33:11.852850 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.66s 2026-01-05 00:33:12.226965 | orchestrator | + osism apply fail2ban 2026-01-05 00:33:25.187352 | orchestrator | 2026-01-05 00:33:25 | INFO  | Task 
1256eec1-05fa-469e-9c0a-483c6d02035e (fail2ban) was prepared for execution.
2026-01-05 00:33:25.187464 | orchestrator | 2026-01-05 00:33:25 | INFO  | It takes a moment until task 1256eec1-05fa-469e-9c0a-483c6d02035e (fail2ban) has been started and output is visible here.
2026-01-05 00:33:47.657167 | orchestrator |
2026-01-05 00:33:47.657276 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-05 00:33:47.657290 | orchestrator |
2026-01-05 00:33:47.657301 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-05 00:33:47.657311 | orchestrator | Monday 05 January 2026 00:33:29 +0000 (0:00:00.296) 0:00:00.296 ********
2026-01-05 00:33:47.657321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:33:47.657332 | orchestrator |
2026-01-05 00:33:47.657341 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-05 00:33:47.657350 | orchestrator | Monday 05 January 2026 00:33:31 +0000 (0:00:01.223) 0:00:01.519 ********
2026-01-05 00:33:47.657359 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:47.657370 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:47.657381 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:47.657396 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:47.657411 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:47.657424 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:47.657438 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:47.657452 | orchestrator |
2026-01-05 00:33:47.657467 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-05 00:33:47.657482 | orchestrator | Monday 05 January 2026 00:33:42 +0000 (0:00:11.420) 0:00:12.940 ********
2026-01-05 00:33:47.657498 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:47.657513 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:47.657528 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:47.657539 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:47.657554 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:47.657576 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:47.657593 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:47.657608 | orchestrator |
2026-01-05 00:33:47.657622 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-05 00:33:47.657636 | orchestrator | Monday 05 January 2026 00:33:44 +0000 (0:00:01.459) 0:00:14.400 ********
2026-01-05 00:33:47.657650 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:33:47.657664 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:33:47.657678 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:33:47.657692 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:47.657707 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:33:47.657721 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:33:47.657735 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:33:47.657749 | orchestrator |
2026-01-05 00:33:47.657765 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-05 00:33:47.657780 | orchestrator | Monday 05 January 2026 00:33:45 +0000 (0:00:01.490) 0:00:15.890 ********
2026-01-05 00:33:47.657796 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:47.657810 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:47.657825 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:47.657839 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:47.657855 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:47.657900 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:47.657914 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:47.657969 | orchestrator |
2026-01-05 00:33:47.657987 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:33:47.658002 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658100 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658122 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658138 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658153 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658169 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658184 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:33:47.658200 | orchestrator |
2026-01-05 00:33:47.658215 | orchestrator |
2026-01-05 00:33:47.658231 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:33:47.658246 | orchestrator | Monday 05 January 2026 00:33:47 +0000 (0:00:01.653) 0:00:17.544 ********
2026-01-05 00:33:47.658261 | orchestrator | ===============================================================================
2026-01-05 00:33:47.658275 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.42s
2026-01-05 00:33:47.658289 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-01-05 00:33:47.658304 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.49s
2026-01-05 00:33:47.658319 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-01-05 00:33:47.658334 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s
2026-01-05 00:33:47.980773 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-05 00:33:47.980942 | orchestrator | + osism apply network
2026-01-05 00:34:00.297936 | orchestrator | 2026-01-05 00:34:00 | INFO  | Task 1c70fe5a-7c77-4543-85c1-39fc83661a4d (network) was prepared for execution.
2026-01-05 00:34:00.298090 | orchestrator | 2026-01-05 00:34:00 | INFO  | It takes a moment until task 1c70fe5a-7c77-4543-85c1-39fc83661a4d (network) has been started and output is visible here.
2026-01-05 00:34:29.667440 | orchestrator |
2026-01-05 00:34:29.667470 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-05 00:34:29.667476 | orchestrator |
2026-01-05 00:34:29.667481 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-05 00:34:29.667486 | orchestrator | Monday 05 January 2026 00:34:04 +0000 (0:00:00.265) 0:00:00.265 ********
2026-01-05 00:34:29.667490 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.667495 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.667500 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.667504 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.667508 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.667512 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.667516 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.667520 | orchestrator |
2026-01-05 00:34:29.667524 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-05 00:34:29.667528 | orchestrator | Monday 05 January 2026 00:34:05 +0000 (0:00:00.749) 0:00:01.014 ********
2026-01-05 00:34:29.667534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:34:29.667546 | orchestrator |
2026-01-05 00:34:29.667550 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-05 00:34:29.667554 | orchestrator | Monday 05 January 2026 00:34:06 +0000 (0:00:01.247) 0:00:02.261 ********
2026-01-05 00:34:29.667558 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.667562 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.667566 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.667570 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.667574 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.667577 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.667581 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.667585 | orchestrator |
2026-01-05 00:34:29.667589 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-05 00:34:29.667593 | orchestrator | Monday 05 January 2026 00:34:08 +0000 (0:00:02.000) 0:00:04.262 ********
2026-01-05 00:34:29.667597 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.667601 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.667605 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.667609 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.667612 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.667616 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.667620 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.667624 | orchestrator |
2026-01-05 00:34:29.667628 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-05 00:34:29.667632 | orchestrator | Monday 05 January 2026 00:34:10 +0000 (0:00:01.848) 0:00:06.110 ********
2026-01-05 00:34:29.667636 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-05 00:34:29.667640 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-05 00:34:29.667644 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-05 00:34:29.667648 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-05 00:34:29.667652 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-05 00:34:29.667656 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-05 00:34:29.667660 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-05 00:34:29.667664 | orchestrator |
2026-01-05 00:34:29.667668 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-05 00:34:29.667678 | orchestrator | Monday 05 January 2026 00:34:11 +0000 (0:00:01.004) 0:00:07.115 ********
2026-01-05 00:34:29.667682 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 00:34:29.667696 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 00:34:29.667700 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 00:34:29.667704 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:34:29.667715 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:34:29.667718 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 00:34:29.667722 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 00:34:29.667726 | orchestrator |
2026-01-05 00:34:29.667730 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-05 00:34:29.667737 | orchestrator | Monday 05 January 2026 00:34:14 +0000 (0:00:03.388) 0:00:10.504 ********
2026-01-05 00:34:29.667741 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:29.667745 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:29.667748 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:29.667752 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:29.667756 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:29.667760 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:29.667764 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:29.667768 | orchestrator |
2026-01-05 00:34:29.667772 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-05 00:34:29.667776 | orchestrator | Monday 05 January 2026 00:34:16 +0000 (0:00:01.646) 0:00:12.150 ********
2026-01-05 00:34:29.667780 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:34:29.667783 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:34:29.667791 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 00:34:29.667795 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 00:34:29.667799 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 00:34:29.667803 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 00:34:29.667807 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 00:34:29.667811 | orchestrator |
2026-01-05 00:34:29.667815 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-05 00:34:29.667819 | orchestrator | Monday 05 January 2026 00:34:18 +0000 (0:00:01.752) 0:00:13.903 ********
2026-01-05 00:34:29.667848 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.667852 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.667856 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.667860 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.667864 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.667868 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.667872 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.667876 | orchestrator |
2026-01-05 00:34:29.667880 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-05 00:34:29.667889 | orchestrator | Monday 05 January 2026 00:34:19 +0000 (0:00:01.192) 0:00:15.096 ********
2026-01-05 00:34:29.667893 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:29.667897 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:29.667901 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:29.667905 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:29.667908 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:29.667912 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:29.667916 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:29.667920 | orchestrator |
2026-01-05 00:34:29.667924 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-01-05 00:34:29.667928 | orchestrator | Monday 05 January 2026 00:34:20 +0000 (0:00:00.753) 0:00:15.849 ********
2026-01-05 00:34:29.667932 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.667936 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.667940 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.667943 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.667947 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.667951 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.667955 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.667959 | orchestrator |
2026-01-05 00:34:29.667963 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-01-05 00:34:29.667967 | orchestrator | Monday 05 January 2026 00:34:22 +0000 (0:00:02.209) 0:00:18.059 ********
2026-01-05 00:34:29.667971 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:29.667974 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:29.667978 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:29.667982 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:29.667986 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:29.667990 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:29.667995 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-01-05 00:34:29.668000 | orchestrator |
2026-01-05 00:34:29.668004 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-01-05 00:34:29.668008 | orchestrator | Monday 05 January 2026 00:34:23 +0000 (0:00:00.899) 0:00:18.959 ********
2026-01-05 00:34:29.668012 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.668016 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:29.668020 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:29.668024 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:29.668028 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:29.668031 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:29.668035 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:29.668039 | orchestrator |
2026-01-05 00:34:29.668043 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-01-05 00:34:29.668052 | orchestrator | Monday 05 January 2026 00:34:25 +0000 (0:00:01.790) 0:00:20.750 ********
2026-01-05 00:34:29.668057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:34:29.668062 | orchestrator |
2026-01-05 00:34:29.668067 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-05 00:34:29.668072 | orchestrator | Monday 05 January 2026 00:34:26 +0000 (0:00:01.292) 0:00:22.042 ********
2026-01-05 00:34:29.668076 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.668081 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.668086 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.668090 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.668094 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.668099 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.668103 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.668108 | orchestrator |
2026-01-05 00:34:29.668112 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-01-05 00:34:29.668117 | orchestrator | Monday 05 January 2026 00:34:27 +0000 (0:00:00.666) 0:00:23.223 ********
2026-01-05 00:34:29.668121 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:29.668126 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:29.668130 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:29.668134 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:29.668139 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:29.668143 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:29.668151 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:29.668156 | orchestrator |
2026-01-05 00:34:29.668160 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-05 00:34:29.668164 | orchestrator | Monday 05 January 2026 00:34:28 +0000 (0:00:00.666) 0:00:23.890 ********
2026-01-05 00:34:29.668169 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668175 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668179 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668184 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668188 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668192 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668197 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668201 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668206 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668210 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668215 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668219 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668224 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-01-05 00:34:29.668228 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-05 00:34:29.668233 | orchestrator |
2026-01-05 00:34:29.668239 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-01-05 00:34:47.234439 | orchestrator | Monday 05 January 2026 00:34:29 +0000 (0:00:01.307) 0:00:25.198 ********
2026-01-05 00:34:47.234543 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:47.234551 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:47.234557 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:47.234561 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:47.234566 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:47.234587 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:47.234591 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:47.234595 | orchestrator |
2026-01-05 00:34:47.234600 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-01-05 00:34:47.234604 | orchestrator | Monday 05 January 2026 00:34:30 +0000 (0:00:00.789) 0:00:25.988 ********
2026-01-05 00:34:47.234610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2, testbed-node-3, testbed-node-5
2026-01-05 00:34:47.234616 | orchestrator |
2026-01-05 00:34:47.234620 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-01-05 00:34:47.234624 | orchestrator | Monday 05 January 2026 00:34:35 +0000 (0:00:04.599) 0:00:30.587 ********
2026-01-05 00:34:47.234629 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234648 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234724 | orchestrator |
2026-01-05 00:34:47.234728 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-01-05 00:34:47.234732 | orchestrator | Monday 05 January 2026 00:34:41 +0000 (0:00:06.011) 0:00:36.599 ********
2026-01-05 00:34:47.234736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234740 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234777 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-05 00:34:47.234792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:47.234805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:53.830997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-05 00:34:53.831133 | orchestrator |
2026-01-05 00:34:53.831153 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-01-05 00:34:53.831166 | orchestrator | Monday 05 January 2026 00:34:47 +0000 (0:00:06.161) 0:00:42.761 ********
2026-01-05 00:34:53.831180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:34:53.831192 | orchestrator |
2026-01-05 00:34:53.831203 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-05 00:34:53.831214 | orchestrator | Monday 05 January 2026 00:34:48 +0000 (0:00:01.298) 0:00:44.059 ********
2026-01-05 00:34:53.831225 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:53.831237 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:53.831248 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:53.831258 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:53.831269 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:53.831280 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:53.831290 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:53.831301 | orchestrator |
2026-01-05 00:34:53.831312 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-05 00:34:53.831323 | orchestrator | Monday 05 January 2026 00:34:49 +0000 (0:00:01.233) 0:00:45.293 ********
2026-01-05 00:34:53.831334 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831346 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831357 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831367 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831378 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:53.831390 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831400 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831411 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831422 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831433 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:53.831446 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831459 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831500 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831512 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831525 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:53.831538 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831568 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831611 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831623 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831634 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:53.831645 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831656 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831666 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831677 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831687 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:53.831698 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831709 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831719 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831730 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831740 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:53.831751 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-05 00:34:53.831762 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-05 00:34:53.831772 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-05 00:34:53.831783 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-05 00:34:53.831793 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:53.831804 | orchestrator |
2026-01-05 00:34:53.831842 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-05 00:34:53.831874 | orchestrator | Monday 05 January 2026 00:34:51 +0000 (0:00:02.198) 0:00:47.492 ********
2026-01-05 00:34:53.831885 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:53.831896 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:53.831907 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:53.831917 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:53.831929 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:53.831948 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:53.831966 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:53.831983 | orchestrator |
2026-01-05 00:34:53.832001 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-05 00:34:53.832020 | orchestrator | Monday 05 January 2026 00:34:52 +0000 (0:00:00.653) 0:00:48.146 ********
2026-01-05 00:34:53.832039 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:53.832060 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:53.832072 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:53.832083 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:53.832093 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:53.832104 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:53.832114 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:53.832125 | orchestrator |
2026-01-05 00:34:53.832136 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:34:53.832148 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 00:34:53.832171 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832183 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832194 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832204 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832215 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832226 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 00:34:53.832237 | orchestrator |
2026-01-05 00:34:53.832248 | orchestrator |
2026-01-05 00:34:53.832258 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:34:53.832269 | orchestrator | Monday 05 January 2026 00:34:53 +0000 (0:00:00.778) 0:00:48.924 ********
2026-01-05 00:34:53.832280 | orchestrator | ===============================================================================
2026-01-05 00:34:53.832291 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.16s
2026-01-05 00:34:53.832302 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.01s
2026-01-05 00:34:53.832312 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.60s
2026-01-05 00:34:53.832323 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.39s
2026-01-05 00:34:53.832334 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s
2026-01-05 00:34:53.832350 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.20s
2026-01-05 00:34:53.832361 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.00s
2026-01-05 00:34:53.832372 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.85s
2026-01-05 00:34:53.832383 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s
2026-01-05 00:34:53.832394 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.75s
2026-01-05 00:34:53.832404 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s
2026-01-05 00:34:53.832415 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.31s
2026-01-05 00:34:53.832426 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.30s
2026-01-05 00:34:53.832436 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2026-01-05 00:34:53.832447 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s
2026-01-05 00:34:53.832458 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s
2026-01-05 00:34:53.832468 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s
2026-01-05 00:34:53.832479 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-01-05 00:34:53.832489 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s
2026-01-05 00:34:53.832500 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s
2026-01-05 00:34:54.194221 | orchestrator | + osism apply wireguard
2026-01-05 00:35:06.351709 | orchestrator | 2026-01-05 00:35:06 | INFO  | Task 2582ed82-0a37-4e5b-82a3-6a20bffca863 (wireguard) was prepared for execution.
2026-01-05 00:35:06.351845 | orchestrator | 2026-01-05 00:35:06 | INFO  | It takes a moment until task 2582ed82-0a37-4e5b-82a3-6a20bffca863 (wireguard) has been started and output is visible here.
2026-01-05 00:35:27.730517 | orchestrator |
2026-01-05 00:35:27.730660 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-05 00:35:27.730690 | orchestrator |
2026-01-05 00:35:27.730711 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-05 00:35:27.730732 | orchestrator | Monday 05 January 2026 00:35:10 +0000 (0:00:00.253) 0:00:00.253 ********
2026-01-05 00:35:27.730753 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:27.730767 | orchestrator |
2026-01-05 00:35:27.730783 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-05 00:35:27.730869 | orchestrator | Monday 05 January 2026 00:35:12 +0000 (0:00:01.670) 0:00:01.923 ********
2026-01-05 00:35:27.730885 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.730897 | orchestrator |
2026-01-05 00:35:27.730909 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-05 00:35:27.730920 | orchestrator | Monday 05 January 2026 00:35:19 +0000 (0:00:07.172) 0:00:09.095 ********
2026-01-05 00:35:27.730931 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.730942 | orchestrator |
2026-01-05 00:35:27.730954 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-05 00:35:27.730965 | orchestrator | Monday 05 January 2026 00:35:20 +0000 (0:00:00.599) 0:00:09.695 ********
2026-01-05 00:35:27.730975 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.730986 | orchestrator |
2026-01-05 00:35:27.730997 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-05 00:35:27.731008 | orchestrator | Monday 05 January 2026 00:35:20 +0000 (0:00:00.486) 0:00:10.181 ********
2026-01-05 00:35:27.731019 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:27.731030 | orchestrator |
2026-01-05 00:35:27.731041 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-05 00:35:27.731052 | orchestrator | Monday 05 January 2026 00:35:21 +0000 (0:00:00.684) 0:00:10.866 ********
2026-01-05 00:35:27.731063 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:27.731073 | orchestrator |
2026-01-05 00:35:27.731084 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-05 00:35:27.731095 | orchestrator | Monday 05 January 2026 00:35:21 +0000 (0:00:00.427) 0:00:11.294 ********
2026-01-05 00:35:27.731106 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:27.731116 | orchestrator |
2026-01-05 00:35:27.731127 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-05 00:35:27.731138 | orchestrator | Monday 05 January 2026 00:35:22 +0000 (0:00:00.454) 0:00:11.749 ********
2026-01-05 00:35:27.731149 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.731159 | orchestrator |
2026-01-05 00:35:27.731175 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-05 00:35:27.731195 | orchestrator | Monday 05 January 2026 00:35:23 +0000 (0:00:01.229) 0:00:12.979 ********
2026-01-05 00:35:27.731216 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:35:27.731235 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.731253 | orchestrator |
2026-01-05 00:35:27.731273 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-05 00:35:27.731293 | orchestrator | Monday 05 January 2026 00:35:24 +0000 (0:00:00.962) 0:00:13.941 ********
2026-01-05 00:35:27.731314 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.731334 | orchestrator |
2026-01-05 00:35:27.731355 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-05 00:35:27.731377 | orchestrator | Monday 05 January 2026 00:35:26 +0000 (0:00:01.890) 0:00:15.832 ********
2026-01-05 00:35:27.731397 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:27.731410 | orchestrator |
2026-01-05 00:35:27.731421 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:35:27.731432 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:35:27.731477 | orchestrator |
2026-01-05 00:35:27.731488 | orchestrator |
2026-01-05 00:35:27.731500 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:35:27.731511 | orchestrator | Monday 05 January 2026 00:35:27 +0000 (0:00:00.989) 0:00:16.822 ********
2026-01-05 00:35:27.731522 | orchestrator | ===============================================================================
2026-01-05 00:35:27.731532 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.17s
2026-01-05 00:35:27.731543 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.89s
2026-01-05 00:35:27.731554 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.67s
2026-01-05 00:35:27.731564 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2026-01-05 00:35:27.731575 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2026-01-05 00:35:27.731585 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-01-05 00:35:27.731596 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s
2026-01-05 00:35:27.731607 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s
2026-01-05 00:35:27.731617 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.49s
2026-01-05 00:35:27.731628 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2026-01-05 00:35:27.731638 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-01-05 00:35:28.062894 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-05 00:35:28.099333 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-05 00:35:28.099442 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-05 00:35:28.174495 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 188 0 --:--:-- --:--:-- --:--:-- 189
2026-01-05 00:35:28.190568 | orchestrator | + osism apply --environment custom workarounds
2026-01-05 00:35:30.209635 | orchestrator | 2026-01-05 00:35:30 | INFO  | Trying to run play workarounds in environment custom
2026-01-05 00:35:40.384428 | orchestrator | 2026-01-05 00:35:40 | INFO  | Task 216abfae-8896-440b-9270-72a824521fef (workarounds) was prepared for execution.
2026-01-05 00:35:40.384579 | orchestrator | 2026-01-05 00:35:40 | INFO  | It takes a moment until task 216abfae-8896-440b-9270-72a824521fef (workarounds) has been started and output is visible here.
2026-01-05 00:36:06.090077 | orchestrator |
2026-01-05 00:36:06.090243 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:36:06.090274 | orchestrator |
2026-01-05 00:36:06.090295 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-05 00:36:06.090308 | orchestrator | Monday 05 January 2026 00:35:44 +0000 (0:00:00.136) 0:00:00.136 ********
2026-01-05 00:36:06.090320 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090332 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090343 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090354 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090365 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090376 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090387 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-05 00:36:06.090398 | orchestrator |
2026-01-05 00:36:06.090409 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-05 00:36:06.090420 | orchestrator |
2026-01-05 00:36:06.090431 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:36:06.090467 | orchestrator | Monday 05 January 2026 00:35:45 +0000 (0:00:00.823) 0:00:00.960 ********
2026-01-05 00:36:06.090479 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:06.090492 | orchestrator |
2026-01-05 00:36:06.090503 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-05 00:36:06.090514 | orchestrator |
2026-01-05 00:36:06.090527 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:36:06.090546 | orchestrator | Monday 05 January 2026 00:35:47 +0000 (0:00:02.486) 0:00:03.447 ********
2026-01-05 00:36:06.090565 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:06.090584 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:06.090602 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:06.090621 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:06.090641 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:06.090660 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:06.090674 | orchestrator |
2026-01-05 00:36:06.090687 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-05 00:36:06.090700 | orchestrator |
2026-01-05 00:36:06.090713 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-05 00:36:06.090726 | orchestrator | Monday 05 January 2026 00:35:49 +0000 (0:00:01.819) 0:00:05.266 ********
2026-01-05 00:36:06.090740 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090783 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090799 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090838 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090859 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090877 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:36:06.090895 | orchestrator |
2026-01-05 00:36:06.090913 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-05 00:36:06.090931 | orchestrator | Monday 05 January 2026 00:35:51 +0000 (0:00:01.549) 0:00:06.815 ********
2026-01-05 00:36:06.090949 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:06.090967 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:06.090984 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:06.091001 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:06.091018 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:06.091036 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:06.091052 | orchestrator |
2026-01-05 00:36:06.091070 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-05 00:36:06.091086 | orchestrator | Monday 05 January 2026 00:35:55 +0000 (0:00:03.802) 0:00:10.617 ********
2026-01-05 00:36:06.091104 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:06.091122 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:06.091139 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:06.091157 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:06.091173 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:06.091189 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:06.091206 | orchestrator |
2026-01-05 00:36:06.091224 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-05 00:36:06.091242 | orchestrator |
2026-01-05 00:36:06.091259 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-05 00:36:06.091277 | orchestrator | Monday 05 January 2026 00:35:55 +0000 (0:00:00.787) 0:00:11.405 ********
2026-01-05 00:36:06.091296 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:06.091313 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:06.091331 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:06.091366 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:06.091385 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:06.091402 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:06.091420 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:06.091437 | orchestrator |
2026-01-05 00:36:06.091455 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-05 00:36:06.091473 | orchestrator | Monday 05 January 2026 00:35:57 +0000 (0:00:01.533) 0:00:12.938 ********
2026-01-05 00:36:06.091490 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:06.091509 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:06.091527 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:06.091546 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:06.091565 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:06.091583 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:06.091633 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:06.091654 | orchestrator |
2026-01-05 00:36:06.091668 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-05 00:36:06.091679 | orchestrator | Monday 05 January 2026 00:35:59 +0000 (0:00:01.613) 0:00:14.552 ********
2026-01-05 00:36:06.091690 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:06.091701 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:06.091711 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:06.091722 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:06.091733 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:06.091743 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:06.091754 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:06.091821 | orchestrator |
2026-01-05 00:36:06.091832 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-05 00:36:06.091843 | orchestrator | Monday 05 January 2026 00:36:00 +0000 (0:00:01.582) 0:00:16.135 ********
2026-01-05 00:36:06.091854 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:06.091864 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:06.091875 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:06.091886 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:06.091897 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:06.091907 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:06.091918 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:06.091928 | orchestrator |
2026-01-05 00:36:06.091939 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-05 00:36:06.091950 | orchestrator | Monday 05 January 2026 00:36:02 +0000 (0:00:01.920) 0:00:18.055 ********
2026-01-05 00:36:06.091961 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:06.091972 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:06.091982 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:06.091993 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:06.092004 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:06.092014 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:06.092025 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:36:06.092035 | orchestrator |
2026-01-05 00:36:06.092046 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-05 00:36:06.092057 | orchestrator |
2026-01-05 00:36:06.092068 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-05 00:36:06.092079 | orchestrator | Monday 05 January 2026 00:36:03 +0000 (0:00:00.693) 0:00:18.748 ********
2026-01-05 00:36:06.092089 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:06.092100 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:06.092111 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:06.092121 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:06.092132 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:06.092142 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:06.092153 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:06.092164 | orchestrator |
2026-01-05 00:36:06.092174 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:36:06.092187 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:36:06.092221 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092232 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092243 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092254 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092265 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092275 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:06.092286 | orchestrator |
2026-01-05 00:36:06.092297 | orchestrator |
2026-01-05 00:36:06.092308 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:36:06.092319 | orchestrator | Monday 05 January 2026 00:36:06 +0000 (0:00:02.821) 0:00:21.569 ********
2026-01-05 00:36:06.092330 | orchestrator | ===============================================================================
2026-01-05 00:36:06.092341 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s
2026-01-05 00:36:06.092351 | orchestrator | Install python3-docker -------------------------------------------------- 2.82s
2026-01-05 00:36:06.092362 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s
2026-01-05 00:36:06.092373 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s
2026-01-05 00:36:06.092383 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s
2026-01-05 00:36:06.092394 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s
2026-01-05 00:36:06.092404 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s
2026-01-05 00:36:06.092415 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s
2026-01-05 00:36:06.092426 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.53s
2026-01-05 00:36:06.092436 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2026-01-05 00:36:06.092447 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.79s
2026-01-05 00:36:06.092466 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s
2026-01-05 00:36:06.813292 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:36:19.014656 | orchestrator | 2026-01-05 00:36:19 | INFO  | Task 38997cb6-3969-4cc3-a36a-44de56183efa (reboot) was prepared for execution.
2026-01-05 00:36:19.014834 | orchestrator | 2026-01-05 00:36:19 | INFO  | It takes a moment until task 38997cb6-3969-4cc3-a36a-44de56183efa (reboot) has been started and output is visible here.
2026-01-05 00:36:29.451462 | orchestrator |
2026-01-05 00:36:29.451580 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.451595 | orchestrator |
2026-01-05 00:36:29.451606 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.451616 | orchestrator | Monday 05 January 2026 00:36:23 +0000 (0:00:00.205) 0:00:00.205 ********
2026-01-05 00:36:29.451626 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:29.451638 | orchestrator |
2026-01-05 00:36:29.451647 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.451657 | orchestrator | Monday 05 January 2026 00:36:23 +0000 (0:00:00.102) 0:00:00.307 ********
2026-01-05 00:36:29.451690 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:29.451701 | orchestrator |
2026-01-05 00:36:29.451711 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.451721 | orchestrator | Monday 05 January 2026 00:36:24 +0000 (0:00:00.941) 0:00:01.248 ********
2026-01-05 00:36:29.451789 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:29.451800 | orchestrator |
2026-01-05 00:36:29.451810 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.451820 | orchestrator |
2026-01-05 00:36:29.451830 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.451839 | orchestrator | Monday 05 January 2026 00:36:24 +0000 (0:00:00.120) 0:00:01.369 ********
2026-01-05 00:36:29.451849 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:29.451858 | orchestrator |
2026-01-05 00:36:29.451868 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.451878 | orchestrator | Monday 05 January 2026 00:36:24 +0000 (0:00:00.111) 0:00:01.480 ********
2026-01-05 00:36:29.451887 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:29.451897 | orchestrator |
2026-01-05 00:36:29.451906 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.451916 | orchestrator | Monday 05 January 2026 00:36:25 +0000 (0:00:00.696) 0:00:02.176 ********
2026-01-05 00:36:29.451929 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:29.451946 | orchestrator |
2026-01-05 00:36:29.451962 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.451978 | orchestrator |
2026-01-05 00:36:29.451994 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.452010 | orchestrator | Monday 05 January 2026 00:36:25 +0000 (0:00:00.129) 0:00:02.306 ********
2026-01-05 00:36:29.452026 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:29.452042 | orchestrator |
2026-01-05 00:36:29.452079 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.452096 | orchestrator | Monday 05 January 2026 00:36:25 +0000 (0:00:00.211) 0:00:02.517 ********
2026-01-05 00:36:29.452114 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:29.452130 | orchestrator |
2026-01-05 00:36:29.452146 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.452164 | orchestrator | Monday 05 January 2026 00:36:26 +0000 (0:00:00.659) 0:00:03.177 ********
2026-01-05 00:36:29.452180 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:29.452191 | orchestrator |
2026-01-05 00:36:29.452203 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.452214 | orchestrator |
2026-01-05 00:36:29.452225 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.452236 | orchestrator | Monday 05 January 2026 00:36:26 +0000 (0:00:00.122) 0:00:03.299 ********
2026-01-05 00:36:29.452247 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:29.452258 | orchestrator |
2026-01-05 00:36:29.452269 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.452280 | orchestrator | Monday 05 January 2026 00:36:26 +0000 (0:00:00.104) 0:00:03.403 ********
2026-01-05 00:36:29.452291 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:29.452302 | orchestrator |
2026-01-05 00:36:29.452313 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.452324 | orchestrator | Monday 05 January 2026 00:36:27 +0000 (0:00:00.690) 0:00:04.094 ********
2026-01-05 00:36:29.452335 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:29.452346 | orchestrator |
2026-01-05 00:36:29.452356 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.452365 | orchestrator |
2026-01-05 00:36:29.452375 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.452384 | orchestrator | Monday 05 January 2026 00:36:27 +0000 (0:00:00.135) 0:00:04.229 ********
2026-01-05 00:36:29.452404 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:29.452414 | orchestrator |
2026-01-05 00:36:29.452423 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.452433 | orchestrator | Monday 05 January 2026 00:36:27 +0000 (0:00:00.095) 0:00:04.325 ********
2026-01-05 00:36:29.452442 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:29.452452 | orchestrator |
2026-01-05 00:36:29.452462 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.452471 | orchestrator | Monday 05 January 2026 00:36:28 +0000 (0:00:00.643) 0:00:04.968 ********
2026-01-05 00:36:29.452481 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:29.452490 | orchestrator |
2026-01-05 00:36:29.452500 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:36:29.452509 | orchestrator |
2026-01-05 00:36:29.452519 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:36:29.452528 | orchestrator | Monday 05 January 2026 00:36:28 +0000 (0:00:00.105) 0:00:05.074 ********
2026-01-05 00:36:29.452538 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:29.452547 | orchestrator |
2026-01-05 00:36:29.452557 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:36:29.452566 | orchestrator | Monday 05 January 2026 00:36:28 +0000 (0:00:00.100) 0:00:05.174 ********
2026-01-05 00:36:29.452575 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:29.452585 | orchestrator |
2026-01-05 00:36:29.452594 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:36:29.452604 | orchestrator | Monday 05 January 2026 00:36:29 +0000 (0:00:00.683) 0:00:05.858 ********
2026-01-05 00:36:29.452632 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:29.452642 | orchestrator |
2026-01-05 00:36:29.452652 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:36:29.452663 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:36:29.452675 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:36:29.452684 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:36:29.452694 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:36:29.452704 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:36:29.452720 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:36:29.452756 | orchestrator | 2026-01-05 00:36:29.452773 | orchestrator | 2026-01-05 00:36:29.452789 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:36:29.452805 | orchestrator | Monday 05 January 2026 00:36:29 +0000 (0:00:00.044) 0:00:05.902 ******** 2026-01-05 00:36:29.452821 | orchestrator | =============================================================================== 2026-01-05 00:36:29.452838 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s 2026-01-05 00:36:29.452850 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s 2026-01-05 00:36:29.452859 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2026-01-05 00:36:29.776656 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-05 00:36:42.068240 | orchestrator | 2026-01-05 00:36:42 | INFO  | Task 60e9b284-d9cf-44c3-9f7d-72a1353fa19d (wait-for-connection) was prepared for execution. 2026-01-05 00:36:42.068425 | orchestrator | 2026-01-05 00:36:42 | INFO  | It takes a moment until task 60e9b284-d9cf-44c3-9f7d-72a1353fa19d (wait-for-connection) has been started and output is visible here. 
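The reboot plays above deliberately trigger the reboot without blocking ("do not wait for the reboot to complete"), and a separate wait-for-connection play then polls until the nodes answer again. A minimal shell sketch of that two-phase pattern follows; the `wait_for_ssh` helper name, the 5-second poll interval, and the default timeout are illustrative assumptions, not part of the testbed scripts:

```shell
# Sketch only: poll a host until SSH answers again after a reboot.
# Names and timings here are assumptions for illustration.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    # BatchMode prevents password prompts; ConnectTimeout bounds each probe.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "$host not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage: fire the reboot without waiting for the connection to drop,
# then poll until the node is back:
#   ssh "$host" sudo reboot || true
#   wait_for_ssh "$host" 600
```

Splitting "reboot" from "wait" keeps the reboot task itself fast and lets the reachability check run as its own play across all nodes in parallel, which matches the per-node plays and the separate `wait-for-connection` run seen in the log.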
2026-01-05 00:36:58.577134 | orchestrator |
2026-01-05 00:36:58.577244 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-05 00:36:58.577257 | orchestrator |
2026-01-05 00:36:58.577267 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-05 00:36:58.577276 | orchestrator | Monday 05 January 2026 00:36:46 +0000 (0:00:00.251) 0:00:00.251 ********
2026-01-05 00:36:58.577285 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:58.577294 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:58.577302 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:58.577310 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:58.577318 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:58.577326 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:58.577333 | orchestrator |
2026-01-05 00:36:58.577342 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:36:58.577350 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577361 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577369 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577377 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577385 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577394 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:58.577401 | orchestrator |
2026-01-05 00:36:58.577409 | orchestrator |
2026-01-05 00:36:58.577417 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:36:58.577425 | orchestrator | Monday 05 January 2026 00:36:58 +0000 (0:00:11.630) 0:00:11.882 ********
2026-01-05 00:36:58.577433 | orchestrator | ===============================================================================
2026-01-05 00:36:58.577442 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s
2026-01-05 00:36:58.931854 | orchestrator | + osism apply hddtemp
2026-01-05 00:37:11.137082 | orchestrator | 2026-01-05 00:37:11 | INFO  | Task 38d82a55-649d-4d3c-a64f-5b895e018fc1 (hddtemp) was prepared for execution.
2026-01-05 00:37:11.137203 | orchestrator | 2026-01-05 00:37:11 | INFO  | It takes a moment until task 38d82a55-649d-4d3c-a64f-5b895e018fc1 (hddtemp) has been started and output is visible here.
2026-01-05 00:37:39.529516 | orchestrator |
2026-01-05 00:37:39.529638 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-05 00:37:39.529658 | orchestrator |
2026-01-05 00:37:39.529672 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-05 00:37:39.529757 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:00.287) 0:00:00.287 ********
2026-01-05 00:37:39.529783 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:39.529796 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:39.529807 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:39.529818 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:39.529829 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:39.529840 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:39.529850 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:39.529861 | orchestrator |
2026-01-05 00:37:39.529873 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-05 00:37:39.529884 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:00.723) 0:00:01.011 ********
2026-01-05 00:37:39.529924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:37:39.529938 | orchestrator |
2026-01-05 00:37:39.529949 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-05 00:37:39.529960 | orchestrator | Monday 05 January 2026 00:37:17 +0000 (0:00:01.241) 0:00:02.253 ********
2026-01-05 00:37:39.529971 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:39.529983 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:39.529994 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:39.530004 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:39.530081 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:39.530095 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:39.530108 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:39.530120 | orchestrator |
2026-01-05 00:37:39.530132 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-05 00:37:39.530144 | orchestrator | Monday 05 January 2026 00:37:19 +0000 (0:00:02.045) 0:00:04.298 ********
2026-01-05 00:37:39.530157 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:39.530171 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:39.530183 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:39.530211 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:39.530224 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:39.530236 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:39.530260 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:39.530273 | orchestrator |
2026-01-05 00:37:39.530286 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-05 00:37:39.530316 | orchestrator | Monday 05 January 2026 00:37:20 +0000 (0:00:01.223) 0:00:05.521 ********
2026-01-05 00:37:39.530328 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:39.530341 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:39.530354 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:39.530367 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:39.530379 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:39.530391 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:39.530402 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:39.530412 | orchestrator |
2026-01-05 00:37:39.530423 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-05 00:37:39.530435 | orchestrator | Monday 05 January 2026 00:37:22 +0000 (0:00:01.250) 0:00:06.771 ********
2026-01-05 00:37:39.530446 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:37:39.530456 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:37:39.530467 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:39.530478 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:37:39.530489 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:37:39.530500 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:37:39.530510 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:37:39.530521 | orchestrator |
2026-01-05 00:37:39.530532 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-05 00:37:39.530542 | orchestrator | Monday 05 January 2026 00:37:22 +0000 (0:00:00.828) 0:00:07.600 ********
2026-01-05 00:37:39.530553 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:39.530564 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:39.530575 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:39.530585 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:39.530596 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:39.530606 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:39.530617 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:39.530628 | orchestrator |
2026-01-05 00:37:39.530639 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-05 00:37:39.530650 | orchestrator | Monday 05 January 2026 00:37:35 +0000 (0:00:12.946) 0:00:20.546 ********
2026-01-05 00:37:39.530661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:37:39.530707 | orchestrator |
2026-01-05 00:37:39.530720 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-05 00:37:39.530731 | orchestrator | Monday 05 January 2026 00:37:37 +0000 (0:00:01.268) 0:00:21.814 ********
2026-01-05 00:37:39.530742 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:39.530753 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:39.530763 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:39.530774 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:39.530785 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:39.530795 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:39.530806 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:39.530817 | orchestrator |
2026-01-05 00:37:39.530827 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:37:39.530839 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:39.530874 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530886 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530898 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530909 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530919 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530930 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:37:39.530941 | orchestrator |
2026-01-05 00:37:39.530952 | orchestrator |
2026-01-05 00:37:39.530963 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:37:39.530974 | orchestrator | Monday 05 January 2026 00:37:39 +0000 (0:00:01.950) 0:00:23.764 ********
2026-01-05 00:37:39.530985 | orchestrator | ===============================================================================
2026-01-05 00:37:39.530995 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.95s
2026-01-05 00:37:39.531006 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.05s
2026-01-05 00:37:39.531017 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s
2026-01-05 00:37:39.531027 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s
2026-01-05 00:37:39.531038 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s
2026-01-05 00:37:39.531049 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.24s
2026-01-05 00:37:39.531060 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s
2026-01-05 00:37:39.531070 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s
2026-01-05 00:37:39.531088 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s
2026-01-05 00:37:39.858619 | orchestrator | ++ semver 9.5.0 7.1.1
2026-01-05 00:37:39.905345 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:37:39.905443 | orchestrator | + sudo systemctl restart manager.service
2026-01-05 00:37:53.627326 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 00:37:53.627428 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-05 00:37:53.627460 | orchestrator | + local max_attempts=60
2026-01-05 00:37:53.627468 | orchestrator | + local name=ceph-ansible
2026-01-05 00:37:53.627475 | orchestrator | + local attempt_num=1
2026-01-05 00:37:53.627482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:37:53.660460 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:37:53.660526 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:37:53.660534 | orchestrator | + sleep 5
2026-01-05 00:37:58.664171 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:37:58.695469 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:37:58.695564 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:37:58.695578 | orchestrator | + sleep 5
2026-01-05 00:38:03.699042 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:03.738317 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:03.738424 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:03.738439 | orchestrator | + sleep 5
2026-01-05 00:38:08.742652 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:08.783002 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:08.783088 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:08.783097 | orchestrator | + sleep 5
2026-01-05 00:38:13.788759 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:13.829488 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:13.829605 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:13.829619 | orchestrator | + sleep 5
2026-01-05 00:38:18.833758 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:18.880742 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:18.880857 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:18.880874 | orchestrator | + sleep 5
2026-01-05 00:38:23.886209 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:23.920496 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:23.920604 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:23.920620 | orchestrator | + sleep 5
2026-01-05 00:38:28.923717 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:28.958544 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:28.958634 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:28.958643 | orchestrator | + sleep 5
2026-01-05 00:38:33.962426 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:33.995996 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:33.996124 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:33.996151 | orchestrator | + sleep 5
2026-01-05 00:38:38.999989 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:39.047236 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:39.047343 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:39.047358 | orchestrator | + sleep 5
2026-01-05 00:38:44.053539 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:44.096540 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:44.096766 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:44.096800 | orchestrator | + sleep 5
2026-01-05 00:38:49.102267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:49.148223 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:49.148350 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:49.148376 | orchestrator | + sleep 5
2026-01-05 00:38:54.153395 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:54.189763 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:54.189869 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:38:54.189884 | orchestrator | + sleep 5
2026-01-05 00:38:59.193626 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:38:59.235275 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:59.235348 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-05 00:38:59.235357 | orchestrator | + local max_attempts=60
2026-01-05 00:38:59.235364 | orchestrator | + local name=kolla-ansible
2026-01-05 00:38:59.235370 | orchestrator | + local attempt_num=1
2026-01-05 00:38:59.235384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-05 00:38:59.272941 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:59.273030 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-05 00:38:59.273043 | orchestrator | + local max_attempts=60
2026-01-05 00:38:59.273052 | orchestrator | + local name=osism-ansible
2026-01-05 00:38:59.273060 | orchestrator | + local attempt_num=1
2026-01-05 00:38:59.273470 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-05 00:38:59.312794 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:38:59.312901 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-05 00:38:59.312917 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-05 00:38:59.482339 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-05 00:38:59.641450 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-05 00:38:59.775985 | orchestrator | ARA in osism-ansible already disabled.
2026-01-05 00:38:59.934248 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-05 00:38:59.935399 | orchestrator | + osism apply gather-facts
2026-01-05 00:39:12.300011 | orchestrator | 2026-01-05 00:39:12 | INFO  | Task 6b450541-3508-4d4e-9189-75776cac8e4b (gather-facts) was prepared for execution.
2026-01-05 00:39:12.300156 | orchestrator | 2026-01-05 00:39:12 | INFO  | It takes a moment until task 6b450541-3508-4d4e-9189-75776cac8e4b (gather-facts) has been started and output is visible here.
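The `+`/`++` xtrace lines above reveal the structure of the `wait_for_container_healthy` helper: it polls `docker inspect` for the container's health status every 5 seconds, up to `max_attempts` tries, and returns as soon as the status is `healthy`. Reconstructed from the trace, it plausibly looks like this (a sketch inferred from the log, not the verbatim testbed script; error handling on failure is an assumption):

```shell
# Reconstructed from the xtrace above: poll the Docker health status of a
# container until it is "healthy", checking every 5s for up to max_attempts
# tries. The timeout behavior is assumed; the trace only shows the happy path.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace it is invoked as `wait_for_container_healthy 60 ceph-ansible`; the `ceph-ansible` container cycles through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before reporting `healthy`, while `kolla-ansible` and `osism-ansible` are already healthy on the first probe.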
2026-01-05 00:39:25.717412 | orchestrator | 2026-01-05 00:39:25.717539 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:39:25.717555 | orchestrator | 2026-01-05 00:39:25.717567 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-05 00:39:25.717578 | orchestrator | Monday 05 January 2026 00:39:16 +0000 (0:00:00.223) 0:00:00.223 ******** 2026-01-05 00:39:25.717588 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:39:25.717599 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:39:25.717610 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:39:25.717673 | orchestrator | ok: [testbed-manager] 2026-01-05 00:39:25.717685 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:39:25.717694 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:39:25.717704 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:39:25.717714 | orchestrator | 2026-01-05 00:39:25.717724 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 00:39:25.717734 | orchestrator | 2026-01-05 00:39:25.717744 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 00:39:25.717754 | orchestrator | Monday 05 January 2026 00:39:24 +0000 (0:00:08.384) 0:00:08.608 ******** 2026-01-05 00:39:25.717764 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:39:25.717775 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:25.717785 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:25.717795 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:25.717804 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:25.717814 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:39:25.717824 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:25.717833 | orchestrator | 2026-01-05 00:39:25.717843 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-05 00:39:25.717853 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717865 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717875 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717884 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717894 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717929 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717940 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:39:25.717952 | orchestrator | 2026-01-05 00:39:25.717963 | orchestrator | 2026-01-05 00:39:25.717976 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:39:25.717988 | orchestrator | Monday 05 January 2026 00:39:25 +0000 (0:00:00.585) 0:00:09.193 ******** 2026-01-05 00:39:25.717999 | orchestrator | =============================================================================== 2026-01-05 00:39:25.718009 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.39s 2026-01-05 00:39:25.718078 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-01-05 00:39:26.065802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-05 00:39:26.079105 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-05 00:39:26.099791 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-05 00:39:26.115257 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-05 00:39:26.129212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-05 00:39:26.147396 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-05 00:39:26.162648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-05 00:39:26.175079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-05 00:39:26.196681 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-05 00:39:26.213727 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-05 00:39:26.232577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-05 00:39:26.250205 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-05 00:39:26.264328 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-05 00:39:26.279416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-05 00:39:26.299669 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-05 00:39:26.316803 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-05 00:39:26.338861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-05 00:39:26.355082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-05 00:39:26.369102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-05 00:39:26.383300 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-05 00:39:26.396759 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-05 00:39:26.820235 | orchestrator | ok: Runtime: 0:24:32.010641 2026-01-05 00:39:26.936444 | 2026-01-05 00:39:26.936616 | TASK [Deploy services] 2026-01-05 00:39:27.475085 | orchestrator | skipping: Conditional result was False 2026-01-05 00:39:27.484369 | 2026-01-05 00:39:27.484511 | TASK [Deploy in a nutshell] 2026-01-05 00:39:28.200789 | orchestrator | 2026-01-05 00:39:28.200994 | orchestrator | # PULL IMAGES 2026-01-05 00:39:28.201016 | orchestrator | 2026-01-05 00:39:28.201031 | orchestrator | + set -e 2026-01-05 00:39:28.201049 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:39:28.201071 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:39:28.201133 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:39:28.201181 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:39:28.201205 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:39:28.201219 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:39:28.201231 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:39:28.201249 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:39:28.201261 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:39:28.201279 | orchestrator | ++ 
CEPH_VERSION=reef
2026-01-05 00:39:28.201291 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 00:39:28.201310 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 00:39:28.201321 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 00:39:28.201335 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 00:39:28.201346 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 00:39:28.201359 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 00:39:28.201370 | orchestrator | ++ export ARA=false
2026-01-05 00:39:28.201381 | orchestrator | ++ ARA=false
2026-01-05 00:39:28.201392 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 00:39:28.201404 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 00:39:28.201414 | orchestrator | ++ export TEMPEST=true
2026-01-05 00:39:28.201425 | orchestrator | ++ TEMPEST=true
2026-01-05 00:39:28.201436 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 00:39:28.201447 | orchestrator | ++ IS_ZUUL=true
2026-01-05 00:39:28.201457 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2026-01-05 00:39:28.201469 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2026-01-05 00:39:28.201480 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 00:39:28.201490 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 00:39:28.201501 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 00:39:28.201512 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 00:39:28.201523 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 00:39:28.201534 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 00:39:28.201545 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 00:39:28.201563 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 00:39:28.201574 | orchestrator | + echo
2026-01-05 00:39:28.201585 | orchestrator | + echo '# PULL IMAGES'
2026-01-05 00:39:28.201596 | orchestrator | + echo
2026-01-05 00:39:28.201656 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-05 00:39:28.261227 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:39:28.261329 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-05 00:39:30.228465 | orchestrator | 2026-01-05 00:39:30 | INFO  | Trying to run play pull-images in environment custom
2026-01-05 00:39:40.357258 | orchestrator | 2026-01-05 00:39:40 | INFO  | Task f714c52b-97d3-4888-b41b-24267c858b0b (pull-images) was prepared for execution.
2026-01-05 00:39:40.357386 | orchestrator | 2026-01-05 00:39:40 | INFO  | Task f714c52b-97d3-4888-b41b-24267c858b0b is running in background. No more output. Check ARA for logs.
2026-01-05 00:39:42.680103 | orchestrator | 2026-01-05 00:39:42 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-05 00:39:52.810961 | orchestrator | 2026-01-05 00:39:52 | INFO  | Task f97c69c2-e1be-4c10-b008-67b97280bac3 (wipe-partitions) was prepared for execution.
2026-01-05 00:39:52.811100 | orchestrator | 2026-01-05 00:39:52 | INFO  | It takes a moment until task f97c69c2-e1be-4c10-b008-67b97280bac3 (wipe-partitions) has been started and output is visible here.
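The `semver 9.5.0 7.0.0` call above compares MANAGER_VERSION against 7.0.0; it printed `1` (9.5.0 is newer), which the script then tests with `[[ 1 -ge 0 ]]`. A minimal sketch of such a compare helper, using GNU `sort -V` for version ordering (hypothetical stand-in, not the actual `semver` script used here; a real semver comparison would also handle pre-release suffixes):

```shell
# Print 1, 0 or -1 for a > b, a == b, a < b (dotted versions only).
semver() {
    local a=$1 b=$2
    if [ "$a" = "$b" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1    # a sorts first, so a < b
    else
        echo 1     # b sorts first, so a > b
    fi
}

semver 9.5.0 7.0.0    # prints 1, so `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` succeeds
```

Gating on `-ge 0` this way means the pull-images step runs for any manager version at or above 7.0.0.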
2026-01-05 00:40:05.691226 | orchestrator |
2026-01-05 00:40:05.691345 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-05 00:40:05.691358 | orchestrator |
2026-01-05 00:40:05.691366 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-05 00:40:05.691379 | orchestrator | Monday 05 January 2026 00:39:57 +0000 (0:00:00.128) 0:00:00.128 ********
2026-01-05 00:40:05.691388 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:05.691396 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:05.691404 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:05.691411 | orchestrator |
2026-01-05 00:40:05.691420 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-05 00:40:05.691455 | orchestrator | Monday 05 January 2026 00:39:57 +0000 (0:00:00.600) 0:00:00.729 ********
2026-01-05 00:40:05.691464 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:05.691472 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:05.691480 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:05.691493 | orchestrator |
2026-01-05 00:40:05.691501 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-05 00:40:05.691510 | orchestrator | Monday 05 January 2026 00:39:58 +0000 (0:00:00.392) 0:00:01.122 ********
2026-01-05 00:40:05.691518 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:05.691527 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:05.691536 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:05.691544 | orchestrator |
2026-01-05 00:40:05.691552 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-05 00:40:05.691560 | orchestrator | Monday 05 January 2026 00:39:58 +0000 (0:00:00.574) 0:00:01.697 ********
2026-01-05 00:40:05.691568 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:05.691577 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:05.691585 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:05.691593 | orchestrator |
2026-01-05 00:40:05.691641 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-05 00:40:05.691650 | orchestrator | Monday 05 January 2026 00:39:59 +0000 (0:00:00.289) 0:00:01.986 ********
2026-01-05 00:40:05.691657 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:40:05.691668 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:40:05.691674 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:40:05.691681 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:40:05.691689 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:40:05.691696 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:40:05.691704 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:40:05.691711 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:40:05.691718 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:40:05.691726 | orchestrator |
2026-01-05 00:40:05.691734 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-05 00:40:05.691741 | orchestrator | Monday 05 January 2026 00:40:00 +0000 (0:00:01.201) 0:00:03.188 ********
2026-01-05 00:40:05.691750 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:40:05.691757 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:40:05.691764 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:40:05.691771 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:40:05.691777 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:40:05.691784 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:40:05.691790 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:40:05.691797 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:40:05.691804 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:40:05.691811 | orchestrator |
2026-01-05 00:40:05.691817 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-05 00:40:05.691825 | orchestrator | Monday 05 January 2026 00:40:01 +0000 (0:00:01.572) 0:00:04.760 ********
2026-01-05 00:40:05.691832 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:40:05.691839 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:40:05.691846 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:40:05.691852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:40:05.691858 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:40:05.691864 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:40:05.691871 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:40:05.691884 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:40:05.691899 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:40:05.691906 | orchestrator |
2026-01-05 00:40:05.691913 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-05 00:40:05.691921 | orchestrator | Monday 05 January 2026 00:40:04 +0000 (0:00:02.152) 0:00:06.913 ********
2026-01-05 00:40:05.691928 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:05.691935 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:05.691942 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:05.691948 | orchestrator |
2026-01-05 00:40:05.691955 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-05 00:40:05.691962 | orchestrator | Monday 05 January 2026 00:40:04 +0000 (0:00:00.626) 0:00:07.539 ********
2026-01-05 00:40:05.691969 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:05.691975 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:05.691982 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:05.691988 | orchestrator |
2026-01-05 00:40:05.691995 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:05.692002 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:05.692011 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:05.692035 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:05.692042 | orchestrator |
2026-01-05 00:40:05.692047 | orchestrator |
2026-01-05 00:40:05.692053 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:05.692059 | orchestrator | Monday 05 January 2026 00:40:05 +0000 (0:00:00.658) 0:00:08.198 ********
2026-01-05 00:40:05.692064 | orchestrator | ===============================================================================
2026-01-05 00:40:05.692070 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s
2026-01-05 00:40:05.692076 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2026-01-05 00:40:05.692081 | orchestrator | Check device availability ----------------------------------------------- 1.20s
2026-01-05 00:40:05.692087 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2026-01-05 00:40:05.692093 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-01-05 00:40:05.692098 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2026-01-05 00:40:05.692104 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s
2026-01-05 00:40:05.692111 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s
2026-01-05 00:40:05.692117 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2026-01-05 00:40:18.112207 | orchestrator | 2026-01-05 00:40:18 | INFO  | Task 3705d3f4-27fa-4d59-ad63-db1a73d8c3f4 (facts) was prepared for execution.
2026-01-05 00:40:18.112347 | orchestrator | 2026-01-05 00:40:18 | INFO  | It takes a moment until task 3705d3f4-27fa-4d59-ad63-db1a73d8c3f4 (facts) has been started and output is visible here.
2026-01-05 00:40:31.396944 | orchestrator |
2026-01-05 00:40:31.397068 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-05 00:40:31.397085 | orchestrator |
2026-01-05 00:40:31.397098 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 00:40:31.397109 | orchestrator | Monday 05 January 2026 00:40:22 +0000 (0:00:00.293) 0:00:00.293 ********
2026-01-05 00:40:31.397121 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:31.397133 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:31.397144 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:31.397155 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:31.397246 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:31.397288 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:31.397300 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:31.397310 | orchestrator |
2026-01-05 00:40:31.397321 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 00:40:31.397332 | orchestrator | Monday 05 January 2026 00:40:23 +0000 (0:00:01.114) 0:00:01.408 ********
2026-01-05 00:40:31.397343 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:40:31.397355 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:31.397369 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:31.397380 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:31.397390 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:31.397401 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:31.397412 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:31.397423 | orchestrator |
2026-01-05 00:40:31.397434 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:40:31.397444 | orchestrator |
2026-01-05 00:40:31.397455 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:40:31.397468 | orchestrator | Monday 05 January 2026 00:40:24 +0000 (0:00:01.197) 0:00:02.606 ********
2026-01-05 00:40:31.397481 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:31.397493 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:31.397506 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:31.397520 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:31.397532 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:31.397544 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:31.397557 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:31.397569 | orchestrator |
2026-01-05 00:40:31.397582 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 00:40:31.397628 | orchestrator |
2026-01-05 00:40:31.397647 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 00:40:31.397666 | orchestrator | Monday 05 January 2026 00:40:30 +0000 (0:00:05.571) 0:00:08.178 ********
2026-01-05 00:40:31.397678 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:40:31.397691 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:31.397704 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:31.397717 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:31.397748 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:31.397760 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:31.397773 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:31.397785 | orchestrator |
2026-01-05 00:40:31.397797 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:31.397811 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397825 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397836 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397847 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397858 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397869 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397880 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:31.397890 | orchestrator |
2026-01-05 00:40:31.397901 | orchestrator |
2026-01-05 00:40:31.397912 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:31.397938 | orchestrator | Monday 05 January 2026 00:40:30 +0000 (0:00:00.541) 0:00:08.719 ********
2026-01-05 00:40:31.397949 | orchestrator | ===============================================================================
2026-01-05 00:40:31.397959 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.57s
2026-01-05 00:40:31.397970 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2026-01-05 00:40:31.397981 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2026-01-05 00:40:31.397991 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-05 00:40:33.563011 | orchestrator | 2026-01-05 00:40:33 | INFO  | Task 5f369b58-f131-47b6-a220-ac6751de3655 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-05 00:40:33.563120 | orchestrator | 2026-01-05 00:40:33 | INFO  | It takes a moment until task 5f369b58-f131-47b6-a220-ac6751de3655 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-05 00:40:44.848305 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:40:44.848441 | orchestrator | 2.16.14
2026-01-05 00:40:44.848460 | orchestrator |
2026-01-05 00:40:44.848473 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:40:44.848486 | orchestrator |
2026-01-05 00:40:44.848497 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:40:44.848509 | orchestrator | Monday 05 January 2026 00:40:37 +0000 (0:00:00.325) 0:00:00.326 ********
2026-01-05 00:40:44.848523 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:40:44.848534 | orchestrator |
2026-01-05 00:40:44.848545 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:40:44.848556 | orchestrator | Monday 05 January 2026 00:40:37 +0000 (0:00:00.227) 0:00:00.553 ********
2026-01-05 00:40:44.848567 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:44.848578 | orchestrator |
2026-01-05 00:40:44.848693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.848705 | orchestrator | Monday 05 January 2026 00:40:37 +0000 (0:00:00.217) 0:00:00.771 ********
2026-01-05 00:40:44.848716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:40:44.848728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:40:44.848739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:40:44.848750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:40:44.848761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:40:44.848772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:40:44.848783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:40:44.848794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:40:44.848807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:40:44.848820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:40:44.848832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:40:44.848845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:40:44.848867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:40:44.848881 | orchestrator |
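The `_add-device-links.yml` include above enriches each kernel device name with its stable `/dev/disk/by-id` aliases (the `scsi-0QEMU_QEMU_HARDDISK_*` names in the following tasks come from such links). A minimal sketch of that resolution, run against a fabricated directory so it needs no real block devices (the helper name and layout are illustrative, not osism's actual task code):

```shell
# Resolve which by-id symlinks point at a given device, the way the play
# maps e.g. sdb -> scsi-0QEMU_QEMU_HARDDISK_<uuid>.
byid=$(mktemp -d)
touch "$byid/sdb"                                  # stand-in for /dev/sdb
ln -s "$byid/sdb" "$byid/scsi-0QEMU_QEMU_HARDDISK_demo"

links_for() {    # print every by-id alias that resolves to device $1
    local dev link
    dev=$(readlink -f "$1")
    for link in "$byid"/*; do
        [ -L "$link" ] || continue                 # aliases are symlinks
        [ "$(readlink -f "$link")" = "$dev" ] && basename "$link"
    done
}

links_for "$byid/sdb"    # prints scsi-0QEMU_QEMU_HARDDISK_demo
```

Carrying the by-id aliases alongside the kernel names lets later tasks reference disks by an identifier that survives reboots and device reordering.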
2026-01-05 00:40:44.848894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.848907 | orchestrator | Monday 05 January 2026 00:40:38 +0000 (0:00:00.410) 0:00:01.181 ********
2026-01-05 00:40:44.848960 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.848974 | orchestrator |
2026-01-05 00:40:44.848987 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.848999 | orchestrator | Monday 05 January 2026 00:40:38 +0000 (0:00:00.184) 0:00:01.366 ********
2026-01-05 00:40:44.849012 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849024 | orchestrator |
2026-01-05 00:40:44.849036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849049 | orchestrator | Monday 05 January 2026 00:40:38 +0000 (0:00:00.176) 0:00:01.542 ********
2026-01-05 00:40:44.849061 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849073 | orchestrator |
2026-01-05 00:40:44.849085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849097 | orchestrator | Monday 05 January 2026 00:40:38 +0000 (0:00:00.198) 0:00:01.741 ********
2026-01-05 00:40:44.849114 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849126 | orchestrator |
2026-01-05 00:40:44.849140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849152 | orchestrator | Monday 05 January 2026 00:40:39 +0000 (0:00:00.221) 0:00:01.963 ********
2026-01-05 00:40:44.849165 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849179 | orchestrator |
2026-01-05 00:40:44.849190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849200 | orchestrator | Monday 05 January 2026 00:40:39 +0000 (0:00:00.201) 0:00:02.165 ********
2026-01-05 00:40:44.849211 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849222 | orchestrator |
2026-01-05 00:40:44.849232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849243 | orchestrator | Monday 05 January 2026 00:40:39 +0000 (0:00:00.223) 0:00:02.388 ********
2026-01-05 00:40:44.849254 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849264 | orchestrator |
2026-01-05 00:40:44.849275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849286 | orchestrator | Monday 05 January 2026 00:40:39 +0000 (0:00:00.213) 0:00:02.601 ********
2026-01-05 00:40:44.849296 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849307 | orchestrator |
2026-01-05 00:40:44.849318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849329 | orchestrator | Monday 05 January 2026 00:40:39 +0000 (0:00:00.262) 0:00:02.864 ********
2026-01-05 00:40:44.849340 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11)
2026-01-05 00:40:44.849352 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11)
2026-01-05 00:40:44.849362 | orchestrator |
2026-01-05 00:40:44.849373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849403 | orchestrator | Monday 05 January 2026 00:40:40 +0000 (0:00:00.439) 0:00:03.304 ********
2026-01-05 00:40:44.849415 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4)
2026-01-05 00:40:44.849425 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4)
2026-01-05 00:40:44.849436 | orchestrator |
2026-01-05 00:40:44.849447 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849458 | orchestrator | Monday 05 January 2026 00:40:41 +0000 (0:00:00.646) 0:00:03.951 ********
2026-01-05 00:40:44.849468 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392)
2026-01-05 00:40:44.849479 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392)
2026-01-05 00:40:44.849490 | orchestrator |
2026-01-05 00:40:44.849501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849511 | orchestrator | Monday 05 January 2026 00:40:41 +0000 (0:00:00.683) 0:00:04.634 ********
2026-01-05 00:40:44.849531 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0)
2026-01-05 00:40:44.849543 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0)
2026-01-05 00:40:44.849554 | orchestrator |
2026-01-05 00:40:44.849564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:40:44.849575 | orchestrator | Monday 05 January 2026 00:40:42 +0000 (0:00:00.904) 0:00:05.538 ********
2026-01-05 00:40:44.849603 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:40:44.849614 | orchestrator |
2026-01-05 00:40:44.849625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849636 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.351) 0:00:05.890 ********
2026-01-05 00:40:44.849652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:40:44.849663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:40:44.849674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:40:44.849684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:40:44.849695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:40:44.849706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:40:44.849716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:40:44.849727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:40:44.849737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:40:44.849748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:40:44.849759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:40:44.849769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:40:44.849780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:40:44.849791 | orchestrator |
2026-01-05 00:40:44.849802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849812 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.391) 0:00:06.282 ********
2026-01-05 00:40:44.849823 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849834 | orchestrator |
2026-01-05 00:40:44.849845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849855 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.221) 0:00:06.503 ********
2026-01-05 00:40:44.849866 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849877 | orchestrator |
2026-01-05 00:40:44.849888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849899 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.194) 0:00:06.698 ********
2026-01-05 00:40:44.849909 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849920 | orchestrator |
2026-01-05 00:40:44.849930 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849941 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.200) 0:00:06.899 ********
2026-01-05 00:40:44.849952 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.849962 | orchestrator |
2026-01-05 00:40:44.849973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.849984 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.200) 0:00:07.100 ********
2026-01-05 00:40:44.849995 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.850094 | orchestrator |
2026-01-05 00:40:44.850109 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.850120 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.201) 0:00:07.301 ********
2026-01-05 00:40:44.850131 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.850142 | orchestrator |
2026-01-05 00:40:44.850152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:44.850163 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.209) 0:00:07.511 ********
2026-01-05 00:40:44.850174 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:44.850184 | orchestrator |
2026-01-05 00:40:44.850204 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.474984 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.204) 0:00:07.716 ********
2026-01-05 00:40:52.475130 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475151 | orchestrator |
2026-01-05 00:40:52.475166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.475178 | orchestrator | Monday 05 January 2026 00:40:45 +0000 (0:00:00.221) 0:00:07.937 ********
2026-01-05 00:40:52.475189 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:40:52.475201 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:40:52.475213 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:40:52.475226 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:40:52.475245 | orchestrator |
2026-01-05 00:40:52.475264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.475282 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:01.149) 0:00:09.087 ********
2026-01-05 00:40:52.475301 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475320 | orchestrator |
2026-01-05 00:40:52.475341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.475360 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:00.211) 0:00:09.298 ********
2026-01-05 00:40:52.475378 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475390 | orchestrator |
2026-01-05 00:40:52.475401 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.475412 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:00.222) 0:00:09.520 ********
2026-01-05 00:40:52.475425 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475444 | orchestrator |
2026-01-05 00:40:52.475462 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:40:52.475483 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:00.204) 0:00:09.725 ********
2026-01-05 00:40:52.475501 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475520 | orchestrator |
2026-01-05 00:40:52.475541 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-05 00:40:52.475562 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.228) 0:00:09.953 ********
2026-01-05 00:40:52.475609 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-05 00:40:52.475625 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-05 00:40:52.475638 | orchestrator |
2026-01-05 00:40:52.475651 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-05 00:40:52.475664 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.174) 0:00:10.128 ********
2026-01-05 00:40:52.475677 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475689 | orchestrator |
2026-01-05 00:40:52.475702 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-05 00:40:52.475738 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.142) 0:00:10.271 ********
2026-01-05 00:40:52.475752 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475766 | orchestrator |
2026-01-05 00:40:52.475778 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-05 00:40:52.475805 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.149) 0:00:10.420 ********
2026-01-05 00:40:52.475856 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.475870 | orchestrator |
2026-01-05 00:40:52.475883 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-05 00:40:52.475895 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.135) 0:00:10.555 ********
2026-01-05 00:40:52.475906 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:52.475917 | orchestrator |
2026-01-05 00:40:52.475930 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-05 00:40:52.475950 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.149) 0:00:10.705 ********
2026-01-05 00:40:52.475968 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6123202-7d2d-5b15-b15a-b013203adbfc'}})
2026-01-05 00:40:52.475988 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}})
2026-01-05 00:40:52.476007 | orchestrator |
2026-01-05 00:40:52.476026 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-05 00:40:52.476044 | orchestrator | Monday 05 January 2026 00:40:47 +0000 (0:00:00.173) 0:00:10.878 ********
2026-01-05 00:40:52.476063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6123202-7d2d-5b15-b15a-b013203adbfc'}})
2026-01-05 00:40:52.476083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}})
2026-01-05 00:40:52.476094 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:52.476105 | orchestrator |
2026-01-05 00:40:52.476116 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-05 00:40:52.476126 | orchestrator | Monday 05 January 2026 00:40:48 +0000 (0:00:00.153) 0:00:11.032 ********
2026-01-05 00:40:52.476137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6123202-7d2d-5b15-b15a-b013203adbfc'}})
2026-01-05 00:40:52.476148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}})  2026-01-05 00:40:52.476159 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476169 | orchestrator | 2026-01-05 00:40:52.476180 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:40:52.476191 | orchestrator | Monday 05 January 2026 00:40:48 +0000 (0:00:00.375) 0:00:11.407 ******** 2026-01-05 00:40:52.476202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6123202-7d2d-5b15-b15a-b013203adbfc'}})  2026-01-05 00:40:52.476234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}})  2026-01-05 00:40:52.476246 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476256 | orchestrator | 2026-01-05 00:40:52.476267 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:40:52.476278 | orchestrator | Monday 05 January 2026 00:40:48 +0000 (0:00:00.200) 0:00:11.608 ******** 2026-01-05 00:40:52.476289 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:40:52.476300 | orchestrator | 2026-01-05 00:40:52.476314 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:40:52.476332 | orchestrator | Monday 05 January 2026 00:40:48 +0000 (0:00:00.156) 0:00:11.765 ******** 2026-01-05 00:40:52.476350 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:40:52.476367 | orchestrator | 2026-01-05 00:40:52.476395 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:40:52.476415 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.132) 0:00:11.897 ******** 2026-01-05 00:40:52.476434 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
00:40:52.476454 | orchestrator | 2026-01-05 00:40:52.476473 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 00:40:52.476490 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.144) 0:00:12.042 ******** 2026-01-05 00:40:52.476521 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476539 | orchestrator | 2026-01-05 00:40:52.476557 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 00:40:52.476576 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.141) 0:00:12.183 ******** 2026-01-05 00:40:52.476622 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476633 | orchestrator | 2026-01-05 00:40:52.476644 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 00:40:52.476661 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.154) 0:00:12.338 ******** 2026-01-05 00:40:52.476680 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 00:40:52.476698 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:40:52.476716 | orchestrator |  "sdb": { 2026-01-05 00:40:52.476734 | orchestrator |  "osd_lvm_uuid": "f6123202-7d2d-5b15-b15a-b013203adbfc" 2026-01-05 00:40:52.476745 | orchestrator |  }, 2026-01-05 00:40:52.476756 | orchestrator |  "sdc": { 2026-01-05 00:40:52.476767 | orchestrator |  "osd_lvm_uuid": "6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21" 2026-01-05 00:40:52.476778 | orchestrator |  } 2026-01-05 00:40:52.476788 | orchestrator |  } 2026-01-05 00:40:52.476799 | orchestrator | } 2026-01-05 00:40:52.476810 | orchestrator | 2026-01-05 00:40:52.476821 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-05 00:40:52.476832 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.145) 0:00:12.484 ******** 2026-01-05 00:40:52.476843 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
00:40:52.476853 | orchestrator | 2026-01-05 00:40:52.476864 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-05 00:40:52.476881 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.174) 0:00:12.659 ******** 2026-01-05 00:40:52.476899 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476917 | orchestrator | 2026-01-05 00:40:52.476936 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-05 00:40:52.476955 | orchestrator | Monday 05 January 2026 00:40:49 +0000 (0:00:00.134) 0:00:12.794 ******** 2026-01-05 00:40:52.476974 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:40:52.476992 | orchestrator | 2026-01-05 00:40:52.477007 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-05 00:40:52.477018 | orchestrator | Monday 05 January 2026 00:40:50 +0000 (0:00:00.141) 0:00:12.935 ******** 2026-01-05 00:40:52.477029 | orchestrator | changed: [testbed-node-3] => { 2026-01-05 00:40:52.477040 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-05 00:40:52.477050 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:40:52.477061 | orchestrator |  "sdb": { 2026-01-05 00:40:52.477074 | orchestrator |  "osd_lvm_uuid": "f6123202-7d2d-5b15-b15a-b013203adbfc" 2026-01-05 00:40:52.477093 | orchestrator |  }, 2026-01-05 00:40:52.477111 | orchestrator |  "sdc": { 2026-01-05 00:40:52.477129 | orchestrator |  "osd_lvm_uuid": "6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21" 2026-01-05 00:40:52.477147 | orchestrator |  } 2026-01-05 00:40:52.477167 | orchestrator |  }, 2026-01-05 00:40:52.477187 | orchestrator |  "lvm_volumes": [ 2026-01-05 00:40:52.477206 | orchestrator |  { 2026-01-05 00:40:52.477224 | orchestrator |  "data": "osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc", 2026-01-05 00:40:52.477237 | orchestrator |  "data_vg": "ceph-f6123202-7d2d-5b15-b15a-b013203adbfc" 2026-01-05 
00:40:52.477247 | orchestrator |  }, 2026-01-05 00:40:52.477258 | orchestrator |  { 2026-01-05 00:40:52.477269 | orchestrator |  "data": "osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21", 2026-01-05 00:40:52.477279 | orchestrator |  "data_vg": "ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21" 2026-01-05 00:40:52.477290 | orchestrator |  } 2026-01-05 00:40:52.477365 | orchestrator |  ] 2026-01-05 00:40:52.477379 | orchestrator |  } 2026-01-05 00:40:52.477390 | orchestrator | } 2026-01-05 00:40:52.477418 | orchestrator | 2026-01-05 00:40:52.477429 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-05 00:40:52.477440 | orchestrator | Monday 05 January 2026 00:40:50 +0000 (0:00:00.336) 0:00:13.272 ******** 2026-01-05 00:40:52.477451 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 00:40:52.477462 | orchestrator | 2026-01-05 00:40:52.477482 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-05 00:40:52.477494 | orchestrator | 2026-01-05 00:40:52.477505 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 00:40:52.477515 | orchestrator | Monday 05 January 2026 00:40:51 +0000 (0:00:01.573) 0:00:14.845 ******** 2026-01-05 00:40:52.477526 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 00:40:52.477537 | orchestrator | 2026-01-05 00:40:52.477549 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 00:40:52.477560 | orchestrator | Monday 05 January 2026 00:40:52 +0000 (0:00:00.270) 0:00:15.116 ******** 2026-01-05 00:40:52.477570 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:40:52.477650 | orchestrator | 2026-01-05 00:40:52.477691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736192 | orchestrator | Monday 05 January 
2026 00:40:52 +0000 (0:00:00.226) 0:00:15.343 ******** 2026-01-05 00:41:00.736312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:41:00.736324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:41:00.736331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:41:00.736339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:41:00.736365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:41:00.736380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:41:00.736387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:41:00.736394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:41:00.736401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 00:41:00.736408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:41:00.736414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:41:00.736421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:41:00.736432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:41:00.736439 | orchestrator | 2026-01-05 00:41:00.736448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736455 | orchestrator | Monday 05 January 2026 00:40:52 +0000 (0:00:00.357) 0:00:15.701 ******** 2026-01-05 00:41:00.736461 | 
orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736469 | orchestrator | 2026-01-05 00:41:00.736475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736481 | orchestrator | Monday 05 January 2026 00:40:53 +0000 (0:00:00.180) 0:00:15.881 ******** 2026-01-05 00:41:00.736487 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736494 | orchestrator | 2026-01-05 00:41:00.736500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736507 | orchestrator | Monday 05 January 2026 00:40:53 +0000 (0:00:00.174) 0:00:16.056 ******** 2026-01-05 00:41:00.736513 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736520 | orchestrator | 2026-01-05 00:41:00.736527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736534 | orchestrator | Monday 05 January 2026 00:40:53 +0000 (0:00:00.165) 0:00:16.221 ******** 2026-01-05 00:41:00.736621 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736631 | orchestrator | 2026-01-05 00:41:00.736638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736646 | orchestrator | Monday 05 January 2026 00:40:53 +0000 (0:00:00.158) 0:00:16.379 ******** 2026-01-05 00:41:00.736654 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736660 | orchestrator | 2026-01-05 00:41:00.736668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736676 | orchestrator | Monday 05 January 2026 00:40:53 +0000 (0:00:00.453) 0:00:16.833 ******** 2026-01-05 00:41:00.736682 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736691 | orchestrator | 2026-01-05 00:41:00.736698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2026-01-05 00:41:00.736705 | orchestrator | Monday 05 January 2026 00:40:54 +0000 (0:00:00.181) 0:00:17.014 ******** 2026-01-05 00:41:00.736714 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736722 | orchestrator | 2026-01-05 00:41:00.736729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736738 | orchestrator | Monday 05 January 2026 00:40:54 +0000 (0:00:00.173) 0:00:17.188 ******** 2026-01-05 00:41:00.736746 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.736753 | orchestrator | 2026-01-05 00:41:00.736779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736787 | orchestrator | Monday 05 January 2026 00:40:54 +0000 (0:00:00.175) 0:00:17.364 ******** 2026-01-05 00:41:00.736795 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb) 2026-01-05 00:41:00.736803 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb) 2026-01-05 00:41:00.736813 | orchestrator | 2026-01-05 00:41:00.736821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736828 | orchestrator | Monday 05 January 2026 00:40:54 +0000 (0:00:00.418) 0:00:17.782 ******** 2026-01-05 00:41:00.736836 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9) 2026-01-05 00:41:00.736845 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9) 2026-01-05 00:41:00.736852 | orchestrator | 2026-01-05 00:41:00.736859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736868 | orchestrator | Monday 05 January 2026 00:40:55 +0000 (0:00:00.615) 0:00:18.397 ******** 2026-01-05 00:41:00.736876 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff) 2026-01-05 00:41:00.736882 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff) 2026-01-05 00:41:00.736891 | orchestrator | 2026-01-05 00:41:00.736899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736927 | orchestrator | Monday 05 January 2026 00:40:56 +0000 (0:00:00.512) 0:00:18.910 ******** 2026-01-05 00:41:00.736935 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302) 2026-01-05 00:41:00.736942 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302) 2026-01-05 00:41:00.736949 | orchestrator | 2026-01-05 00:41:00.736959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:41:00.736966 | orchestrator | Monday 05 January 2026 00:40:56 +0000 (0:00:00.534) 0:00:19.444 ******** 2026-01-05 00:41:00.736973 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:41:00.736980 | orchestrator | 2026-01-05 00:41:00.736988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.736997 | orchestrator | Monday 05 January 2026 00:40:56 +0000 (0:00:00.417) 0:00:19.862 ******** 2026-01-05 00:41:00.737008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:41:00.737028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:41:00.737036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:41:00.737043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 
00:41:00.737050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:41:00.737059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:41:00.737066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:41:00.737073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:41:00.737080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 00:41:00.737087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:41:00.737093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:41:00.737100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:41:00.737107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:41:00.737115 | orchestrator | 2026-01-05 00:41:00.737123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737130 | orchestrator | Monday 05 January 2026 00:40:57 +0000 (0:00:00.411) 0:00:20.274 ******** 2026-01-05 00:41:00.737138 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737144 | orchestrator | 2026-01-05 00:41:00.737151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737161 | orchestrator | Monday 05 January 2026 00:40:58 +0000 (0:00:00.738) 0:00:21.012 ******** 2026-01-05 00:41:00.737167 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737174 | orchestrator | 2026-01-05 00:41:00.737182 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2026-01-05 00:41:00.737189 | orchestrator | Monday 05 January 2026 00:40:58 +0000 (0:00:00.292) 0:00:21.305 ******** 2026-01-05 00:41:00.737195 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737205 | orchestrator | 2026-01-05 00:41:00.737212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737219 | orchestrator | Monday 05 January 2026 00:40:58 +0000 (0:00:00.350) 0:00:21.655 ******** 2026-01-05 00:41:00.737236 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737243 | orchestrator | 2026-01-05 00:41:00.737251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737258 | orchestrator | Monday 05 January 2026 00:40:59 +0000 (0:00:00.255) 0:00:21.911 ******** 2026-01-05 00:41:00.737266 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737273 | orchestrator | 2026-01-05 00:41:00.737280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737287 | orchestrator | Monday 05 January 2026 00:40:59 +0000 (0:00:00.189) 0:00:22.100 ******** 2026-01-05 00:41:00.737294 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737302 | orchestrator | 2026-01-05 00:41:00.737309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737316 | orchestrator | Monday 05 January 2026 00:40:59 +0000 (0:00:00.141) 0:00:22.241 ******** 2026-01-05 00:41:00.737324 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:00.737331 | orchestrator | 2026-01-05 00:41:00.737339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737346 | orchestrator | Monday 05 January 2026 00:40:59 +0000 (0:00:00.182) 0:00:22.423 ******** 2026-01-05 00:41:00.737353 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 00:41:00.737367 | orchestrator | 2026-01-05 00:41:00.737375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737382 | orchestrator | Monday 05 January 2026 00:40:59 +0000 (0:00:00.174) 0:00:22.598 ******** 2026-01-05 00:41:00.737390 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 00:41:00.737399 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 00:41:00.737408 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 00:41:00.737415 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 00:41:00.737423 | orchestrator | 2026-01-05 00:41:00.737430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:00.737438 | orchestrator | Monday 05 January 2026 00:41:00 +0000 (0:00:00.807) 0:00:23.406 ******** 2026-01-05 00:41:00.737445 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.097965 | orchestrator | 2026-01-05 00:41:07.098164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:07.098183 | orchestrator | Monday 05 January 2026 00:41:00 +0000 (0:00:00.205) 0:00:23.611 ******** 2026-01-05 00:41:07.098196 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098208 | orchestrator | 2026-01-05 00:41:07.098219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:07.098231 | orchestrator | Monday 05 January 2026 00:41:00 +0000 (0:00:00.210) 0:00:23.822 ******** 2026-01-05 00:41:07.098248 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098266 | orchestrator | 2026-01-05 00:41:07.098285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:41:07.098305 | orchestrator | Monday 05 January 2026 00:41:01 +0000 (0:00:00.192) 0:00:24.015 ******** 2026-01-05 
00:41:07.098325 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098342 | orchestrator | 2026-01-05 00:41:07.098361 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 00:41:07.098380 | orchestrator | Monday 05 January 2026 00:41:01 +0000 (0:00:00.560) 0:00:24.575 ******** 2026-01-05 00:41:07.098392 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-05 00:41:07.098403 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-05 00:41:07.098414 | orchestrator | 2026-01-05 00:41:07.098425 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 00:41:07.098436 | orchestrator | Monday 05 January 2026 00:41:01 +0000 (0:00:00.204) 0:00:24.779 ******** 2026-01-05 00:41:07.098447 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098458 | orchestrator | 2026-01-05 00:41:07.098469 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 00:41:07.098480 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.159) 0:00:24.939 ******** 2026-01-05 00:41:07.098491 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098502 | orchestrator | 2026-01-05 00:41:07.098512 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 00:41:07.098523 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.148) 0:00:25.087 ******** 2026-01-05 00:41:07.098534 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098544 | orchestrator | 2026-01-05 00:41:07.098555 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 00:41:07.098566 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.132) 0:00:25.220 ******** 2026-01-05 00:41:07.098603 | orchestrator | ok: [testbed-node-4] 2026-01-05 
00:41:07.098616 | orchestrator | 2026-01-05 00:41:07.098627 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 00:41:07.098755 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.129) 0:00:25.349 ******** 2026-01-05 00:41:07.098770 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '846bb30c-958c-57a2-8682-0625433ec757'}}) 2026-01-05 00:41:07.098782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be99b097-8f9c-5b18-b9e6-1dc57f49383d'}}) 2026-01-05 00:41:07.098819 | orchestrator | 2026-01-05 00:41:07.098830 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 00:41:07.098841 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.189) 0:00:25.538 ******** 2026-01-05 00:41:07.098853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '846bb30c-958c-57a2-8682-0625433ec757'}})  2026-01-05 00:41:07.098866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be99b097-8f9c-5b18-b9e6-1dc57f49383d'}})  2026-01-05 00:41:07.098877 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.098887 | orchestrator | 2026-01-05 00:41:07.098898 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 00:41:07.098909 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.123) 0:00:25.662 ******** 2026-01-05 00:41:07.098948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '846bb30c-958c-57a2-8682-0625433ec757'}})  2026-01-05 00:41:07.098982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be99b097-8f9c-5b18-b9e6-1dc57f49383d'}})  2026-01-05 00:41:07.098994 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099005 | 
orchestrator | 2026-01-05 00:41:07.099015 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:41:07.099026 | orchestrator | Monday 05 January 2026 00:41:02 +0000 (0:00:00.130) 0:00:25.793 ******** 2026-01-05 00:41:07.099037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '846bb30c-958c-57a2-8682-0625433ec757'}})  2026-01-05 00:41:07.099049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be99b097-8f9c-5b18-b9e6-1dc57f49383d'}})  2026-01-05 00:41:07.099060 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099070 | orchestrator | 2026-01-05 00:41:07.099081 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:41:07.099092 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.124) 0:00:25.917 ******** 2026-01-05 00:41:07.099103 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:41:07.099113 | orchestrator | 2026-01-05 00:41:07.099124 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:41:07.099135 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.111) 0:00:26.028 ******** 2026-01-05 00:41:07.099145 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:41:07.099156 | orchestrator | 2026-01-05 00:41:07.099167 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:41:07.099178 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.110) 0:00:26.139 ******** 2026-01-05 00:41:07.099208 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099220 | orchestrator | 2026-01-05 00:41:07.099231 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 00:41:07.099242 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.294) 0:00:26.434 
******** 2026-01-05 00:41:07.099252 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099263 | orchestrator | 2026-01-05 00:41:07.099274 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 00:41:07.099285 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.138) 0:00:26.572 ******** 2026-01-05 00:41:07.099296 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099306 | orchestrator | 2026-01-05 00:41:07.099317 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 00:41:07.099328 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.123) 0:00:26.695 ******** 2026-01-05 00:41:07.099339 | orchestrator | ok: [testbed-node-4] => { 2026-01-05 00:41:07.099349 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:41:07.099360 | orchestrator |  "sdb": { 2026-01-05 00:41:07.099372 | orchestrator |  "osd_lvm_uuid": "846bb30c-958c-57a2-8682-0625433ec757" 2026-01-05 00:41:07.099383 | orchestrator |  }, 2026-01-05 00:41:07.099403 | orchestrator |  "sdc": { 2026-01-05 00:41:07.099414 | orchestrator |  "osd_lvm_uuid": "be99b097-8f9c-5b18-b9e6-1dc57f49383d" 2026-01-05 00:41:07.099425 | orchestrator |  } 2026-01-05 00:41:07.099435 | orchestrator |  } 2026-01-05 00:41:07.099446 | orchestrator | } 2026-01-05 00:41:07.099458 | orchestrator | 2026-01-05 00:41:07.099469 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-05 00:41:07.099479 | orchestrator | Monday 05 January 2026 00:41:03 +0000 (0:00:00.137) 0:00:26.833 ******** 2026-01-05 00:41:07.099490 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:07.099501 | orchestrator | 2026-01-05 00:41:07.099511 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-05 00:41:07.099522 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.132) 0:00:26.965 ******** 
2026-01-05 00:41:07.099533 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:07.099543 | orchestrator |
2026-01-05 00:41:07.099554 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:41:07.099564 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.131) 0:00:27.097 ********
2026-01-05 00:41:07.099602 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:07.099614 | orchestrator |
2026-01-05 00:41:07.099624 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:41:07.099635 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.112) 0:00:27.210 ********
2026-01-05 00:41:07.099646 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 00:41:07.099656 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:41:07.099667 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:41:07.099678 | orchestrator |             "sdb": {
2026-01-05 00:41:07.099689 | orchestrator |                 "osd_lvm_uuid": "846bb30c-958c-57a2-8682-0625433ec757"
2026-01-05 00:41:07.099700 | orchestrator |             },
2026-01-05 00:41:07.099710 | orchestrator |             "sdc": {
2026-01-05 00:41:07.099721 | orchestrator |                 "osd_lvm_uuid": "be99b097-8f9c-5b18-b9e6-1dc57f49383d"
2026-01-05 00:41:07.099732 | orchestrator |             }
2026-01-05 00:41:07.099743 | orchestrator |         },
2026-01-05 00:41:07.099753 | orchestrator |         "lvm_volumes": [
2026-01-05 00:41:07.099764 | orchestrator |             {
2026-01-05 00:41:07.099775 | orchestrator |                 "data": "osd-block-846bb30c-958c-57a2-8682-0625433ec757",
2026-01-05 00:41:07.099786 | orchestrator |                 "data_vg": "ceph-846bb30c-958c-57a2-8682-0625433ec757"
2026-01-05 00:41:07.099796 | orchestrator |             },
2026-01-05 00:41:07.099807 | orchestrator |             {
2026-01-05 00:41:07.099818 | orchestrator |                 "data": "osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d",
2026-01-05 00:41:07.099828 | orchestrator |                 "data_vg": "ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d"
2026-01-05 00:41:07.099839 | orchestrator |             }
2026-01-05 00:41:07.099850 | orchestrator |         ]
2026-01-05 00:41:07.099860 | orchestrator |     }
2026-01-05 00:41:07.099871 | orchestrator | }
2026-01-05 00:41:07.099882 | orchestrator |
2026-01-05 00:41:07.099893 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 00:41:07.099904 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.201) 0:00:27.411 ********
2026-01-05 00:41:07.099914 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:07.099925 | orchestrator |
2026-01-05 00:41:07.099936 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:41:07.099946 | orchestrator |
2026-01-05 00:41:07.099957 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:41:07.099967 | orchestrator | Monday 05 January 2026 00:41:05 +0000 (0:00:01.134) 0:00:28.545 ********
2026-01-05 00:41:07.099978 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:07.099988 | orchestrator |
2026-01-05 00:41:07.099999 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:41:07.100017 | orchestrator | Monday 05 January 2026 00:41:06 +0000 (0:00:00.700) 0:00:29.245 ********
2026-01-05 00:41:07.100029 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:07.100039 | orchestrator |
2026-01-05 00:41:07.100050 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:07.100061 | orchestrator | Monday 05 January 2026 00:41:06 +0000 (0:00:00.257) 0:00:29.503 ********
2026-01-05 00:41:07.100071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:41:07.100082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:41:07.100099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:41:07.100110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:41:07.100121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:41:07.100138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:41:15.674208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:41:15.674337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:41:15.674353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-05 00:41:15.674366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:41:15.674377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:41:15.674389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:41:15.674400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:41:15.674411 | orchestrator |
2026-01-05 00:41:15.674424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674437 | orchestrator | Monday 05 January 2026 00:41:07 +0000 (0:00:00.448) 0:00:29.951 ********
2026-01-05 00:41:15.674449 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674460 | orchestrator |
2026-01-05 00:41:15.674472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674483 | orchestrator | Monday 05 January 2026 00:41:07 +0000 (0:00:00.260) 0:00:30.212 ********
2026-01-05 00:41:15.674494 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674505 | orchestrator |
2026-01-05 00:41:15.674517 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674528 | orchestrator | Monday 05 January 2026 00:41:07 +0000 (0:00:00.219) 0:00:30.431 ********
2026-01-05 00:41:15.674539 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674549 | orchestrator |
2026-01-05 00:41:15.674561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674597 | orchestrator | Monday 05 January 2026 00:41:07 +0000 (0:00:00.236) 0:00:30.668 ********
2026-01-05 00:41:15.674609 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674620 | orchestrator |
2026-01-05 00:41:15.674630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674641 | orchestrator | Monday 05 January 2026 00:41:08 +0000 (0:00:00.213) 0:00:30.882 ********
2026-01-05 00:41:15.674652 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674663 | orchestrator |
2026-01-05 00:41:15.674674 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674684 | orchestrator | Monday 05 January 2026 00:41:08 +0000 (0:00:00.219) 0:00:31.101 ********
2026-01-05 00:41:15.674695 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674706 | orchestrator |
2026-01-05 00:41:15.674717 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674728 | orchestrator | Monday 05 January 2026 00:41:08 +0000 (0:00:00.288) 0:00:31.390 ********
2026-01-05 00:41:15.674767 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674779 | orchestrator |
2026-01-05 00:41:15.674790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674801 | orchestrator | Monday 05 January 2026 00:41:08 +0000 (0:00:00.194) 0:00:31.584 ********
2026-01-05 00:41:15.674812 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.674823 | orchestrator |
2026-01-05 00:41:15.674833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674845 | orchestrator | Monday 05 January 2026 00:41:08 +0000 (0:00:00.226) 0:00:31.810 ********
2026-01-05 00:41:15.674856 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078)
2026-01-05 00:41:15.674868 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078)
2026-01-05 00:41:15.674879 | orchestrator |
2026-01-05 00:41:15.674890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674901 | orchestrator | Monday 05 January 2026 00:41:09 +0000 (0:00:00.870) 0:00:32.681 ********
2026-01-05 00:41:15.674912 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52)
2026-01-05 00:41:15.674923 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52)
2026-01-05 00:41:15.674934 | orchestrator |
2026-01-05 00:41:15.674945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.674956 | orchestrator | Monday 05 January 2026 00:41:10 +0000 (0:00:00.411) 0:00:33.093 ********
2026-01-05 00:41:15.674967 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3)
2026-01-05 00:41:15.674977 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3)
2026-01-05 00:41:15.674988 | orchestrator |
2026-01-05 00:41:15.674999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.675010 | orchestrator | Monday 05 January 2026 00:41:10 +0000 (0:00:00.358) 0:00:33.451 ********
2026-01-05 00:41:15.675021 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c)
2026-01-05 00:41:15.675031 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c)
2026-01-05 00:41:15.675042 | orchestrator |
2026-01-05 00:41:15.675053 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:41:15.675064 | orchestrator | Monday 05 January 2026 00:41:10 +0000 (0:00:00.398) 0:00:33.849 ********
2026-01-05 00:41:15.675075 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:41:15.675085 | orchestrator |
2026-01-05 00:41:15.675096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675128 | orchestrator | Monday 05 January 2026 00:41:11 +0000 (0:00:00.649) 0:00:34.499 ********
2026-01-05 00:41:15.675140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:41:15.675151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:41:15.675161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:41:15.675172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:41:15.675183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:41:15.675193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:41:15.675204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:41:15.675215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:41:15.675234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-05 00:41:15.675245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:41:15.675256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:41:15.675286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:41:15.675297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:41:15.675308 | orchestrator |
2026-01-05 00:41:15.675320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675331 | orchestrator | Monday 05 January 2026 00:41:11 +0000 (0:00:00.372) 0:00:34.871 ********
2026-01-05 00:41:15.675341 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675352 | orchestrator |
2026-01-05 00:41:15.675363 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675373 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.179) 0:00:35.051 ********
2026-01-05 00:41:15.675384 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675394 | orchestrator |
2026-01-05 00:41:15.675405 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675416 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.175) 0:00:35.227 ********
2026-01-05 00:41:15.675432 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675443 | orchestrator |
2026-01-05 00:41:15.675454 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675465 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.166) 0:00:35.393 ********
2026-01-05 00:41:15.675475 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675486 | orchestrator |
2026-01-05 00:41:15.675497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675508 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.185) 0:00:35.578 ********
2026-01-05 00:41:15.675518 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675529 | orchestrator |
2026-01-05 00:41:15.675540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675550 | orchestrator | Monday 05 January 2026 00:41:12 +0000 (0:00:00.165) 0:00:35.744 ********
2026-01-05 00:41:15.675561 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675599 | orchestrator |
2026-01-05 00:41:15.675610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675621 | orchestrator | Monday 05 January 2026 00:41:13 +0000 (0:00:00.597) 0:00:36.342 ********
2026-01-05 00:41:15.675632 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675643 | orchestrator |
2026-01-05 00:41:15.675653 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675664 | orchestrator | Monday 05 January 2026 00:41:13 +0000 (0:00:00.240) 0:00:36.583 ********
2026-01-05 00:41:15.675675 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675686 | orchestrator |
2026-01-05 00:41:15.675697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675707 | orchestrator | Monday 05 January 2026 00:41:13 +0000 (0:00:00.295) 0:00:36.878 ********
2026-01-05 00:41:15.675718 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-05 00:41:15.675729 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-05 00:41:15.675740 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-05 00:41:15.675750 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-05 00:41:15.675761 | orchestrator |
2026-01-05 00:41:15.675772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675783 | orchestrator | Monday 05 January 2026 00:41:14 +0000 (0:00:00.766) 0:00:37.644 ********
2026-01-05 00:41:15.675794 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675804 | orchestrator |
2026-01-05 00:41:15.675823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675834 | orchestrator | Monday 05 January 2026 00:41:14 +0000 (0:00:00.214) 0:00:37.859 ********
2026-01-05 00:41:15.675845 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675856 | orchestrator |
2026-01-05 00:41:15.675867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675878 | orchestrator | Monday 05 January 2026 00:41:15 +0000 (0:00:00.220) 0:00:38.079 ********
2026-01-05 00:41:15.675889 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675899 | orchestrator |
2026-01-05 00:41:15.675910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:41:15.675921 | orchestrator | Monday 05 January 2026 00:41:15 +0000 (0:00:00.221) 0:00:38.301 ********
2026-01-05 00:41:15.675932 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:15.675942 | orchestrator |
2026-01-05 00:41:15.675961 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-05 00:41:20.318232 | orchestrator | Monday 05 January 2026 00:41:15 +0000 (0:00:00.243) 0:00:38.544 ********
2026-01-05 00:41:20.318336 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-05 00:41:20.318348 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-05 00:41:20.318356 | orchestrator |
2026-01-05 00:41:20.318365 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-05 00:41:20.318374 | orchestrator | Monday 05 January 2026 00:41:15 +0000 (0:00:00.219) 0:00:38.764 ********
2026-01-05 00:41:20.318382 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318389 | orchestrator |
2026-01-05 00:41:20.318397 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-05 00:41:20.318404 | orchestrator | Monday 05 January 2026 00:41:16 +0000 (0:00:00.153) 0:00:38.917 ********
2026-01-05 00:41:20.318411 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318419 | orchestrator |
2026-01-05 00:41:20.318426 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-05 00:41:20.318433 | orchestrator | Monday 05 January 2026 00:41:16 +0000 (0:00:00.156) 0:00:39.074 ********
2026-01-05 00:41:20.318440 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318447 | orchestrator |
2026-01-05 00:41:20.318454 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-05 00:41:20.318462 | orchestrator | Monday 05 January 2026 00:41:16 +0000 (0:00:00.486) 0:00:39.561 ********
2026-01-05 00:41:20.318469 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:20.318477 | orchestrator |
2026-01-05 00:41:20.318484 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-05 00:41:20.318492 | orchestrator | Monday 05 January 2026 00:41:16 +0000 (0:00:00.149) 0:00:39.710 ********
2026-01-05 00:41:20.318500 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c427200-cd92-5345-a12e-93ab1a68a0a9'}})
2026-01-05 00:41:20.318508 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0a3b48c-8251-5295-95c4-04cb80bcb769'}})
2026-01-05 00:41:20.318516 | orchestrator |
2026-01-05 00:41:20.318523 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-05 00:41:20.318530 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.178) 0:00:39.889 ********
2026-01-05 00:41:20.318539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c427200-cd92-5345-a12e-93ab1a68a0a9'}})
2026-01-05 00:41:20.318548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0a3b48c-8251-5295-95c4-04cb80bcb769'}})
2026-01-05 00:41:20.318555 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318562 | orchestrator |
2026-01-05 00:41:20.318598 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-05 00:41:20.318606 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.155) 0:00:40.045 ********
2026-01-05 00:41:20.318614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c427200-cd92-5345-a12e-93ab1a68a0a9'}})
2026-01-05 00:41:20.318643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0a3b48c-8251-5295-95c4-04cb80bcb769'}})
2026-01-05 00:41:20.318651 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318658 | orchestrator |
2026-01-05 00:41:20.318666 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-05 00:41:20.318675 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.146) 0:00:40.191 ********
2026-01-05 00:41:20.318684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c427200-cd92-5345-a12e-93ab1a68a0a9'}})
2026-01-05 00:41:20.318692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0a3b48c-8251-5295-95c4-04cb80bcb769'}})
2026-01-05 00:41:20.318701 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318709 | orchestrator |
2026-01-05 00:41:20.318717 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-05 00:41:20.318725 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.148) 0:00:40.339 ********
2026-01-05 00:41:20.318734 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:20.318742 | orchestrator |
2026-01-05 00:41:20.318750 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-05 00:41:20.318759 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.158) 0:00:40.498 ********
2026-01-05 00:41:20.318767 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:20.318775 | orchestrator |
2026-01-05 00:41:20.318800 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-05 00:41:20.318810 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.154) 0:00:40.653 ********
2026-01-05 00:41:20.318818 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318826 | orchestrator |
2026-01-05 00:41:20.318834 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-05 00:41:20.318842 | orchestrator | Monday 05 January 2026 00:41:17 +0000 (0:00:00.146) 0:00:40.799 ********
2026-01-05 00:41:20.318850 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318858 | orchestrator |
2026-01-05 00:41:20.318867 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 00:41:20.318875 | orchestrator | Monday 05 January 2026 00:41:18 +0000 (0:00:00.167) 0:00:40.967 ********
2026-01-05 00:41:20.318883 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.318892 | orchestrator |
2026-01-05 00:41:20.318900 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 00:41:20.318908 | orchestrator | Monday 05 January 2026 00:41:18 +0000 (0:00:00.185) 0:00:41.152 ********
2026-01-05 00:41:20.318917 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 00:41:20.318925 | orchestrator |     "ceph_osd_devices": {
2026-01-05 00:41:20.318934 | orchestrator |         "sdb": {
2026-01-05 00:41:20.318957 | orchestrator |             "osd_lvm_uuid": "8c427200-cd92-5345-a12e-93ab1a68a0a9"
2026-01-05 00:41:20.318966 | orchestrator |         },
2026-01-05 00:41:20.318974 | orchestrator |         "sdc": {
2026-01-05 00:41:20.318981 | orchestrator |             "osd_lvm_uuid": "f0a3b48c-8251-5295-95c4-04cb80bcb769"
2026-01-05 00:41:20.318988 | orchestrator |         }
2026-01-05 00:41:20.318996 | orchestrator |     }
2026-01-05 00:41:20.319004 | orchestrator | }
2026-01-05 00:41:20.319012 | orchestrator |
2026-01-05 00:41:20.319019 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 00:41:20.319026 | orchestrator | Monday 05 January 2026 00:41:18 +0000 (0:00:00.161) 0:00:41.314 ********
2026-01-05 00:41:20.319034 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.319041 | orchestrator |
2026-01-05 00:41:20.319048 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 00:41:20.319055 | orchestrator | Monday 05 January 2026 00:41:18 +0000 (0:00:00.372) 0:00:41.686 ********
2026-01-05 00:41:20.319062 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.319077 | orchestrator |
2026-01-05 00:41:20.319085 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:41:20.319092 | orchestrator | Monday 05 January 2026 00:41:18 +0000 (0:00:00.147) 0:00:41.833 ********
2026-01-05 00:41:20.319099 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:20.319106 | orchestrator |
2026-01-05 00:41:20.319114 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:41:20.319121 | orchestrator | Monday 05 January 2026 00:41:19 +0000 (0:00:00.136) 0:00:41.970 ********
2026-01-05 00:41:20.319128 | orchestrator | changed: [testbed-node-5] => {
2026-01-05 00:41:20.319135 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:41:20.319142 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:41:20.319149 | orchestrator |             "sdb": {
2026-01-05 00:41:20.319157 | orchestrator |                 "osd_lvm_uuid": "8c427200-cd92-5345-a12e-93ab1a68a0a9"
2026-01-05 00:41:20.319164 | orchestrator |             },
2026-01-05 00:41:20.319171 | orchestrator |             "sdc": {
2026-01-05 00:41:20.319178 | orchestrator |                 "osd_lvm_uuid": "f0a3b48c-8251-5295-95c4-04cb80bcb769"
2026-01-05 00:41:20.319186 | orchestrator |             }
2026-01-05 00:41:20.319193 | orchestrator |         },
2026-01-05 00:41:20.319200 | orchestrator |         "lvm_volumes": [
2026-01-05 00:41:20.319207 | orchestrator |             {
2026-01-05 00:41:20.319215 | orchestrator |                 "data": "osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9",
2026-01-05 00:41:20.319222 | orchestrator |                 "data_vg": "ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9"
2026-01-05 00:41:20.319229 | orchestrator |             },
2026-01-05 00:41:20.319236 | orchestrator |             {
2026-01-05 00:41:20.319243 | orchestrator |                 "data": "osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769",
2026-01-05 00:41:20.319255 | orchestrator |                 "data_vg": "ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769"
2026-01-05 00:41:20.319263 | orchestrator |             }
2026-01-05 00:41:20.319270 | orchestrator |         ]
2026-01-05 00:41:20.319282 | orchestrator |     }
2026-01-05 00:41:20.319290 | orchestrator | }
2026-01-05 00:41:20.319297 | orchestrator |
2026-01-05 00:41:20.319304 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 00:41:20.319312 | orchestrator | Monday 05 January 2026 00:41:19 +0000 (0:00:00.214) 0:00:42.185 ********
2026-01-05 00:41:20.319319 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:41:20.319326 | orchestrator |
2026-01-05 00:41:20.319333 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:41:20.319340 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 00:41:20.319350 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 00:41:20.319357 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 00:41:20.319365 | orchestrator |
2026-01-05 00:41:20.319372 | orchestrator |
2026-01-05 00:41:20.319379 | orchestrator |
2026-01-05 00:41:20.319386 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:41:20.319394 | orchestrator | Monday 05 January 2026 00:41:20 +0000 (0:00:00.984) 0:00:43.169 ********
2026-01-05 00:41:20.319401 | orchestrator | ===============================================================================
2026-01-05 00:41:20.319408 | orchestrator | Write configuration file ------------------------------------------------ 3.69s
2026-01-05 00:41:20.319415 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s
2026-01-05 00:41:20.319423 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.20s
2026-01-05 00:41:20.319430 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2026-01-05 00:41:20.319442 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2026-01-05 00:41:20.319449 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-01-05 00:41:20.319456 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-01-05 00:41:20.319463 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2026-01-05 00:41:20.319471 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-01-05 00:41:20.319478 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.75s
2026-01-05 00:41:20.319485 | orchestrator | Print configuration data ------------------------------------------------ 0.75s
2026-01-05 00:41:20.319492 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-01-05 00:41:20.319500 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2026-01-05 00:41:20.319511 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-01-05 00:41:20.677816 | orchestrator | Print WAL devices ------------------------------------------------------- 0.68s
2026-01-05 00:41:20.677929 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s
2026-01-05 00:41:20.677943 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-01-05 00:41:20.677955 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-01-05 00:41:20.677966 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-01-05 00:41:20.677977 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s
2026-01-05 00:41:43.610172 | orchestrator | 2026-01-05 00:41:43 | INFO  | Task 776a79f2-afb6-4010-9fd3-d9ec71a22f92 (sync inventory) is running in background. Output coming soon.
2026-01-05 00:42:11.175146 | orchestrator | 2026-01-05 00:41:45 | INFO  | Starting group_vars file reorganization
2026-01-05 00:42:11.175287 | orchestrator | 2026-01-05 00:41:45 | INFO  | Moved 0 file(s) to their respective directories
2026-01-05 00:42:11.175315 | orchestrator | 2026-01-05 00:41:45 | INFO  | Group_vars file reorganization completed
2026-01-05 00:42:11.175336 | orchestrator | 2026-01-05 00:41:48 | INFO  | Starting variable preparation from inventory
2026-01-05 00:42:11.175355 | orchestrator | 2026-01-05 00:41:51 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-05 00:42:11.175375 | orchestrator | 2026-01-05 00:41:51 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-05 00:42:11.175393 | orchestrator | 2026-01-05 00:41:51 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-05 00:42:11.175413 | orchestrator | 2026-01-05 00:41:51 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-05 00:42:11.175432 | orchestrator | 2026-01-05 00:41:51 | INFO  | Variable preparation completed
2026-01-05 00:42:11.175452 | orchestrator | 2026-01-05 00:41:53 | INFO  | Starting inventory overwrite handling
2026-01-05 00:42:11.175464 | orchestrator | 2026-01-05 00:41:53 | INFO  | Handling group overwrites in 99-overwrite
2026-01-05 00:42:11.175475 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removing group frr:children from 60-generic
2026-01-05 00:42:11.175486 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-05 00:42:11.175523 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-05 00:42:11.175603 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-05 00:42:11.175626 | orchestrator | 2026-01-05 00:41:53 | INFO  | Handling group overwrites in 20-roles
2026-01-05 00:42:11.175644 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-05 00:42:11.175702 | orchestrator | 2026-01-05 00:41:53 | INFO  | Removed 5 group(s) in total
2026-01-05 00:42:11.175725 | orchestrator | 2026-01-05 00:41:53 | INFO  | Inventory overwrite handling completed
2026-01-05 00:42:11.175743 | orchestrator | 2026-01-05 00:41:54 | INFO  | Starting merge of inventory files
2026-01-05 00:42:11.175763 | orchestrator | 2026-01-05 00:41:54 | INFO  | Inventory files merged successfully
2026-01-05 00:42:11.175783 | orchestrator | 2026-01-05 00:41:58 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-05 00:42:11.175804 | orchestrator | 2026-01-05 00:42:10 | INFO  | Successfully wrote ClusterShell configuration
2026-01-05 00:42:11.175824 | orchestrator | [master 2a8f6a8] 2026-01-05-00-42
2026-01-05 00:42:11.175846 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-05 00:42:13.171705 | orchestrator | 2026-01-05 00:42:13 | INFO  | Task ecdacb6e-def4-46f2-aafd-d1cfcd685f6b (ceph-create-lvm-devices) was prepared for execution.
2026-01-05 00:42:13.171813 | orchestrator | 2026-01-05 00:42:13 | INFO  | It takes a moment until task ecdacb6e-def4-46f2-aafd-d1cfcd685f6b (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-05 00:42:25.021151 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:42:25.021279 | orchestrator | 2.16.14
2026-01-05 00:42:25.021295 | orchestrator |
2026-01-05 00:42:25.021306 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:42:25.021315 | orchestrator |
2026-01-05 00:42:25.021323 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:42:25.021331 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.279) 0:00:00.279 ********
2026-01-05 00:42:25.021339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:42:25.021347 | orchestrator |
2026-01-05 00:42:25.021354 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:42:25.021361 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.237) 0:00:00.516 ********
2026-01-05 00:42:25.021368 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:25.021375 | orchestrator |
2026-01-05 00:42:25.021382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021390 | orchestrator | Monday 05 January 2026 00:42:17 +0000 (0:00:00.216) 0:00:00.733 ********
2026-01-05 00:42:25.021397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:42:25.021404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:42:25.021411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:42:25.021419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:42:25.021423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:42:25.021427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:42:25.021431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:42:25.021434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:42:25.021439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:42:25.021443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:42:25.021446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:42:25.021450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:42:25.021454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:42:25.021479 | orchestrator |
2026-01-05 00:42:25.021483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021487 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.479) 0:00:01.213 ********
2026-01-05 00:42:25.021490 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021494 | orchestrator |
2026-01-05 00:42:25.021498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021502 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.236) 0:00:01.450 ********
2026-01-05 00:42:25.021505 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021509 | orchestrator |
2026-01-05 00:42:25.021513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021517 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.255) 0:00:01.705 ********
2026-01-05 00:42:25.021521 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021525 | orchestrator |
2026-01-05 00:42:25.021529 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021533 | orchestrator | Monday 05 January 2026 00:42:18 +0000 (0:00:00.206) 0:00:01.912 ********
2026-01-05 00:42:25.021588 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021593 | orchestrator |
2026-01-05 00:42:25.021597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021601 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.201) 0:00:02.113 ********
2026-01-05 00:42:25.021604 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021608 | orchestrator |
2026-01-05 00:42:25.021612 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021616 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.231) 0:00:02.345 ********
2026-01-05 00:42:25.021620 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021624 | orchestrator |
2026-01-05 00:42:25.021628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021632 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.231) 0:00:02.577 ********
2026-01-05 00:42:25.021636 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021639 | orchestrator |
2026-01-05 00:42:25.021643 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021647 | orchestrator | Monday 05 January 2026 00:42:19 +0000 (0:00:00.218) 0:00:02.795 ********
2026-01-05 00:42:25.021651 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021654 | orchestrator |
2026-01-05 00:42:25.021658 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021662 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.234) 0:00:03.029 ********
2026-01-05 00:42:25.021666 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11)
2026-01-05 00:42:25.021672 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11)
2026-01-05 00:42:25.021676 | orchestrator |
2026-01-05 00:42:25.021681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021701 | orchestrator | Monday 05 January 2026 00:42:20 +0000 (0:00:00.564) 0:00:03.593 ********
2026-01-05 00:42:25.021706 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4)
2026-01-05 00:42:25.021711 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4)
2026-01-05 00:42:25.021715 | orchestrator |
2026-01-05 00:42:25.021720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021724 | orchestrator | Monday 05 January 2026 00:42:21 +0000 (0:00:00.691) 0:00:04.284 ********
2026-01-05 00:42:25.021729 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392)
2026-01-05 00:42:25.021733 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392)
2026-01-05 00:42:25.021743 | orchestrator |
2026-01-05 00:42:25.021747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021752 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.721) 0:00:05.006 ********
2026-01-05 00:42:25.021757 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0)
2026-01-05 00:42:25.021761 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0)
2026-01-05 00:42:25.021766 | orchestrator |
2026-01-05 00:42:25.021771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:42:25.021775 | orchestrator | Monday 05 January 2026 00:42:22 +0000 (0:00:00.815) 0:00:05.821 ********
2026-01-05 00:42:25.021780 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:42:25.021784 | orchestrator |
2026-01-05 00:42:25.021788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021793 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.338) 0:00:06.159 ********
2026-01-05 00:42:25.021797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:42:25.021802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:42:25.021806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:42:25.021811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:42:25.021815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:42:25.021820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:42:25.021824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:42:25.021829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:42:25.021833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:42:25.021838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:42:25.021842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:42:25.021865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:42:25.021870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:42:25.021875 | orchestrator |
2026-01-05 00:42:25.021880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021884 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.386) 0:00:06.546 ********
2026-01-05 00:42:25.021889 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021893 | orchestrator |
2026-01-05 00:42:25.021898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021902 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.208) 0:00:06.754 ********
2026-01-05 00:42:25.021907 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021911 | orchestrator |
2026-01-05 00:42:25.021915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021920 | orchestrator | Monday 05 January 2026 00:42:23 +0000 (0:00:00.215) 0:00:06.970 ********
2026-01-05 00:42:25.021924 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021929 | orchestrator |
2026-01-05 00:42:25.021934 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021938 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.194) 0:00:07.164 ********
2026-01-05 00:42:25.021943 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021951 | orchestrator |
2026-01-05 00:42:25.021955 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021960 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.212) 0:00:07.376 ********
2026-01-05 00:42:25.021965 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021969 | orchestrator |
2026-01-05 00:42:25.021973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021978 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.212) 0:00:07.589 ********
2026-01-05 00:42:25.021983 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.021987 | orchestrator |
2026-01-05 00:42:25.021992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:25.021996 | orchestrator | Monday 05 January 2026 00:42:24 +0000 (0:00:00.219) 0:00:07.809 ********
2026-01-05 00:42:25.022001 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:25.022005 | orchestrator |
2026-01-05 00:42:25.022094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.029849 | orchestrator | Monday 05 January 2026 00:42:25 +0000 (0:00:00.192) 0:00:08.001 ********
2026-01-05 00:42:33.029979 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.029996 | orchestrator |
2026-01-05 00:42:33.030010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.030169 | orchestrator | Monday 05 January 2026 00:42:25 +0000 (0:00:00.224) 0:00:08.226 ********
2026-01-05 00:42:33.030182 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:42:33.030194 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:42:33.030206 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:42:33.030217 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:42:33.030227 | orchestrator |
2026-01-05 00:42:33.030239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.030249 | orchestrator | Monday 05 January 2026 00:42:26 +0000 (0:00:01.136) 0:00:09.362 ********
2026-01-05 00:42:33.030260 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030271 | orchestrator |
2026-01-05 00:42:33.030282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.030293 | orchestrator | Monday 05 January 2026 00:42:26 +0000 (0:00:00.194) 0:00:09.557 ********
2026-01-05 00:42:33.030303 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030314 | orchestrator |
2026-01-05 00:42:33.030325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.030336 | orchestrator | Monday 05 January 2026 00:42:26 +0000 (0:00:00.183) 0:00:09.740 ********
2026-01-05 00:42:33.030347 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030360 | orchestrator |
2026-01-05 00:42:33.030373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:42:33.030386 | orchestrator | Monday 05 January 2026 00:42:26 +0000 (0:00:00.230) 0:00:09.971 ********
2026-01-05 00:42:33.030398 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030429 | orchestrator |
2026-01-05 00:42:33.030442 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-05 00:42:33.030455 | orchestrator | Monday 05 January 2026 00:42:27 +0000 (0:00:00.192) 0:00:10.163 ********
2026-01-05 00:42:33.030468 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030480 | orchestrator |
2026-01-05 00:42:33.030493 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-05 00:42:33.030506 | orchestrator | Monday 05 January 2026 00:42:27 +0000 (0:00:00.129) 0:00:10.292 ********
2026-01-05 00:42:33.030519 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6123202-7d2d-5b15-b15a-b013203adbfc'}})
2026-01-05 00:42:33.030596 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}})
2026-01-05 00:42:33.030613 | orchestrator |
2026-01-05 00:42:33.030626 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-05 00:42:33.030664 | orchestrator | Monday 05 January 2026 00:42:27 +0000 (0:00:00.217) 0:00:10.510 ********
2026-01-05 00:42:33.030680 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.030695 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.030709 | orchestrator |
2026-01-05 00:42:33.030723 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-05 00:42:33.030754 | orchestrator | Monday 05 January 2026 00:42:29 +0000 (0:00:01.955) 0:00:12.466 ********
2026-01-05 00:42:33.030765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.030777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.030788 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030798 | orchestrator |
2026-01-05 00:42:33.030809 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-05 00:42:33.030820 | orchestrator | Monday 05 January 2026 00:42:29 +0000 (0:00:00.145) 0:00:12.612 ********
2026-01-05 00:42:33.030831 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.030842 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.030853 | orchestrator |
2026-01-05 00:42:33.030863 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-05 00:42:33.030875 | orchestrator | Monday 05 January 2026 00:42:31 +0000 (0:00:01.475) 0:00:14.087 ********
2026-01-05 00:42:33.030886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.030896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.030907 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030918 | orchestrator |
2026-01-05 00:42:33.030928 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 00:42:33.030939 | orchestrator | Monday 05 January 2026 00:42:31 +0000 (0:00:00.175) 0:00:14.263 ********
2026-01-05 00:42:33.030971 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.030983 | orchestrator |
2026-01-05 00:42:33.030994 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 00:42:33.031005 | orchestrator | Monday 05 January 2026 00:42:31 +0000 (0:00:00.133) 0:00:14.397 ********
2026-01-05 00:42:33.031016 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031037 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031048 | orchestrator |
2026-01-05 00:42:33.031058 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 00:42:33.031069 | orchestrator | Monday 05 January 2026 00:42:31 +0000 (0:00:00.282) 0:00:14.679 ********
2026-01-05 00:42:33.031079 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031090 | orchestrator |
2026-01-05 00:42:33.031101 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 00:42:33.031111 | orchestrator | Monday 05 January 2026 00:42:31 +0000 (0:00:00.192) 0:00:14.872 ********
2026-01-05 00:42:33.031132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031154 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031164 | orchestrator |
2026-01-05 00:42:33.031175 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 00:42:33.031185 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.154) 0:00:15.027 ********
2026-01-05 00:42:33.031196 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031207 | orchestrator |
2026-01-05 00:42:33.031217 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 00:42:33.031228 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.132) 0:00:15.160 ********
2026-01-05 00:42:33.031239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031260 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031271 | orchestrator |
2026-01-05 00:42:33.031281 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 00:42:33.031292 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.151) 0:00:15.311 ********
2026-01-05 00:42:33.031303 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:33.031313 | orchestrator |
2026-01-05 00:42:33.031324 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:42:33.031335 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.128) 0:00:15.439 ********
2026-01-05 00:42:33.031346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031368 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031379 | orchestrator |
2026-01-05 00:42:33.031389 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:42:33.031400 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.142) 0:00:15.582 ********
2026-01-05 00:42:33.031411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031439 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031450 | orchestrator |
2026-01-05 00:42:33.031461 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:42:33.031472 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.142) 0:00:15.725 ********
2026-01-05 00:42:33.031483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:33.031494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:33.031504 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031515 | orchestrator |
2026-01-05 00:42:33.031525 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:42:33.031582 | orchestrator | Monday 05 January 2026 00:42:32 +0000 (0:00:00.140) 0:00:15.865 ********
2026-01-05 00:42:33.031601 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:33.031612 | orchestrator |
2026-01-05 00:42:33.031623 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:42:33.031641 | orchestrator | Monday 05 January 2026 00:42:33 +0000 (0:00:00.141) 0:00:16.007 ********
2026-01-05 00:42:39.743219 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.743370 | orchestrator |
2026-01-05 00:42:39.743394 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:42:39.743408 | orchestrator | Monday 05 January 2026 00:42:33 +0000 (0:00:00.164) 0:00:16.172 ********
2026-01-05 00:42:39.743419 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.743430 | orchestrator |
2026-01-05 00:42:39.743441 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:42:39.743452 | orchestrator | Monday 05 January 2026 00:42:33 +0000 (0:00:00.148) 0:00:16.320 ********
2026-01-05 00:42:39.743463 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:42:39.743475 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:42:39.743486 | orchestrator | }
2026-01-05 00:42:39.743497 | orchestrator |
2026-01-05 00:42:39.743508 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:42:39.743519 | orchestrator | Monday 05 January 2026 00:42:33 +0000 (0:00:00.370) 0:00:16.691 ********
2026-01-05 00:42:39.743568 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:42:39.743582 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:42:39.743594 | orchestrator | }
2026-01-05 00:42:39.743606 | orchestrator |
2026-01-05 00:42:39.743625 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:42:39.743645 | orchestrator | Monday 05 January 2026 00:42:33 +0000 (0:00:00.159) 0:00:16.850 ********
2026-01-05 00:42:39.743664 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:42:39.743682 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:42:39.743694 | orchestrator | }
2026-01-05 00:42:39.743708 | orchestrator |
2026-01-05 00:42:39.743720 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:42:39.743733 | orchestrator | Monday 05 January 2026 00:42:34 +0000 (0:00:00.181) 0:00:17.031 ********
2026-01-05 00:42:39.743746 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:39.743759 | orchestrator |
2026-01-05 00:42:39.743771 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:42:39.743785 | orchestrator | Monday 05 January 2026 00:42:34 +0000 (0:00:00.670) 0:00:17.702 ********
2026-01-05 00:42:39.743797 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:39.743810 | orchestrator |
2026-01-05 00:42:39.743822 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:42:39.743834 | orchestrator | Monday 05 January 2026 00:42:35 +0000 (0:00:00.515) 0:00:18.218 ********
2026-01-05 00:42:39.743848 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:39.743861 | orchestrator |
2026-01-05 00:42:39.743875 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:42:39.743889 | orchestrator | Monday 05 January 2026 00:42:35 +0000 (0:00:00.500) 0:00:18.719 ********
2026-01-05 00:42:39.743901 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:42:39.743914 | orchestrator |
2026-01-05 00:42:39.743927 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:42:39.743939 | orchestrator | Monday 05 January 2026 00:42:35 +0000 (0:00:00.158) 0:00:18.877 ********
2026-01-05 00:42:39.743953 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.743965 | orchestrator |
2026-01-05 00:42:39.743978 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:42:39.743992 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.114) 0:00:18.991 ********
2026-01-05 00:42:39.744005 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744018 | orchestrator |
2026-01-05 00:42:39.744031 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:42:39.744072 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.117) 0:00:19.109 ********
2026-01-05 00:42:39.744114 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:42:39.744132 | orchestrator |     "vgs_report": {
2026-01-05 00:42:39.744144 | orchestrator |         "vg": []
2026-01-05 00:42:39.744155 | orchestrator |     }
2026-01-05 00:42:39.744166 | orchestrator | }
2026-01-05 00:42:39.744177 | orchestrator |
2026-01-05 00:42:39.744188 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:42:39.744199 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.148) 0:00:19.257 ********
2026-01-05 00:42:39.744210 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744221 | orchestrator |
2026-01-05 00:42:39.744231 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:42:39.744242 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.130) 0:00:19.388 ********
2026-01-05 00:42:39.744253 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744264 | orchestrator |
2026-01-05 00:42:39.744275 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:42:39.744285 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.134) 0:00:19.522 ********
2026-01-05 00:42:39.744296 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744307 | orchestrator |
2026-01-05 00:42:39.744318 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:42:39.744329 | orchestrator | Monday 05 January 2026 00:42:36 +0000 (0:00:00.345) 0:00:19.868 ********
2026-01-05 00:42:39.744340 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744350 | orchestrator |
2026-01-05 00:42:39.744361 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:42:39.744372 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.150) 0:00:20.019 ********
2026-01-05 00:42:39.744383 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744394 | orchestrator |
2026-01-05 00:42:39.744405 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:42:39.744416 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.134) 0:00:20.153 ********
2026-01-05 00:42:39.744426 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744437 | orchestrator |
2026-01-05 00:42:39.744448 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:42:39.744459 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.151) 0:00:20.305 ********
2026-01-05 00:42:39.744470 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744480 | orchestrator |
2026-01-05 00:42:39.744491 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:42:39.744502 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.162) 0:00:20.467 ********
2026-01-05 00:42:39.744563 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744577 | orchestrator |
2026-01-05 00:42:39.744588 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:42:39.744599 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.154) 0:00:20.622 ********
2026-01-05 00:42:39.744610 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744621 | orchestrator |
2026-01-05 00:42:39.744632 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:42:39.744643 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.151) 0:00:20.773 ********
2026-01-05 00:42:39.744654 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744665 | orchestrator |
2026-01-05 00:42:39.744676 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:42:39.744686 | orchestrator | Monday 05 January 2026 00:42:37 +0000 (0:00:00.146) 0:00:20.919 ********
2026-01-05 00:42:39.744697 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744708 | orchestrator |
2026-01-05 00:42:39.744719 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:42:39.744736 | orchestrator | Monday 05 January 2026 00:42:38 +0000 (0:00:00.143) 0:00:21.063 ********
2026-01-05 00:42:39.744769 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744781 | orchestrator |
2026-01-05 00:42:39.744792 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:42:39.744803 | orchestrator | Monday 05 January 2026 00:42:38 +0000 (0:00:00.146) 0:00:21.210 ********
2026-01-05 00:42:39.744814 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744825 | orchestrator |
2026-01-05 00:42:39.744836 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:42:39.744847 | orchestrator | Monday 05 January 2026 00:42:38 +0000 (0:00:00.141) 0:00:21.351 ********
2026-01-05 00:42:39.744857 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744868 | orchestrator |
2026-01-05 00:42:39.744879 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:42:39.744890 | orchestrator | Monday 05 January 2026 00:42:38 +0000 (0:00:00.135) 0:00:21.487 ********
2026-01-05 00:42:39.744902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:39.744914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:39.744925 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.744935 | orchestrator |
2026-01-05 00:42:39.744946 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 00:42:39.744957 | orchestrator | Monday 05 January 2026 00:42:38 +0000 (0:00:00.406) 0:00:21.893 ********
2026-01-05 00:42:39.744968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:39.744979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:39.744990 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.745000 | orchestrator |
2026-01-05 00:42:39.745011 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 00:42:39.745022 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.158) 0:00:22.051 ********
2026-01-05 00:42:39.745033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})
2026-01-05 00:42:39.745044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})
2026-01-05 00:42:39.745055 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:42:39.745065 | orchestrator |
2026-01-05 00:42:39.745076 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 00:42:39.745087 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.170) 0:00:22.222 ********
2026-01-05 00:42:39.745098 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:39.745108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:39.745119 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:39.745130 | orchestrator | 2026-01-05 00:42:39.745141 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 00:42:39.745151 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.169) 0:00:22.391 ******** 2026-01-05 00:42:39.745162 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:39.745173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:39.745191 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:39.745202 | orchestrator | 2026-01-05 00:42:39.745212 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 00:42:39.745223 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.169) 0:00:22.561 ******** 2026-01-05 00:42:39.745241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739295 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739315 | orchestrator | 2026-01-05 00:42:45.739330 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-05 00:42:45.739344 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.164) 0:00:22.726 ******** 2026-01-05 00:42:45.739355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739379 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739390 | orchestrator | 2026-01-05 00:42:45.739422 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:42:45.739436 | orchestrator | Monday 05 January 2026 00:42:39 +0000 (0:00:00.175) 0:00:22.901 ******** 2026-01-05 00:42:45.739447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739472 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739482 | orchestrator | 2026-01-05 00:42:45.739494 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:42:45.739506 | orchestrator | Monday 05 January 2026 00:42:40 +0000 (0:00:00.156) 0:00:23.058 ******** 2026-01-05 00:42:45.739517 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:42:45.739564 | orchestrator | 2026-01-05 00:42:45.739576 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:42:45.739587 | orchestrator | Monday 05 January 2026 00:42:40 +0000 
(0:00:00.561) 0:00:23.619 ******** 2026-01-05 00:42:45.739598 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:42:45.739609 | orchestrator | 2026-01-05 00:42:45.739620 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:42:45.739631 | orchestrator | Monday 05 January 2026 00:42:41 +0000 (0:00:00.516) 0:00:24.136 ******** 2026-01-05 00:42:45.739642 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:42:45.739653 | orchestrator | 2026-01-05 00:42:45.739664 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:42:45.739677 | orchestrator | Monday 05 January 2026 00:42:41 +0000 (0:00:00.155) 0:00:24.292 ******** 2026-01-05 00:42:45.739689 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'vg_name': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}) 2026-01-05 00:42:45.739708 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'vg_name': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'}) 2026-01-05 00:42:45.739721 | orchestrator | 2026-01-05 00:42:45.739733 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:42:45.739745 | orchestrator | Monday 05 January 2026 00:42:41 +0000 (0:00:00.221) 0:00:24.513 ******** 2026-01-05 00:42:45.739757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739811 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739823 | orchestrator | 2026-01-05 00:42:45.739836 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-05 00:42:45.739848 | orchestrator | Monday 05 January 2026 00:42:41 +0000 (0:00:00.386) 0:00:24.900 ******** 2026-01-05 00:42:45.739860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739872 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739886 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739907 | orchestrator | 2026-01-05 00:42:45.739928 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:42:45.739943 | orchestrator | Monday 05 January 2026 00:42:42 +0000 (0:00:00.189) 0:00:25.089 ******** 2026-01-05 00:42:45.739955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'})  2026-01-05 00:42:45.739967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'})  2026-01-05 00:42:45.739978 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:42:45.739990 | orchestrator | 2026-01-05 00:42:45.740001 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:42:45.740012 | orchestrator | Monday 05 January 2026 00:42:42 +0000 (0:00:00.171) 0:00:25.260 ******** 2026-01-05 00:42:45.740043 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 00:42:45.740057 | orchestrator |  "lvm_report": { 2026-01-05 00:42:45.740069 | orchestrator |  "lv": [ 2026-01-05 00:42:45.740080 | orchestrator |  { 2026-01-05 00:42:45.740092 | orchestrator |  "lv_name": 
"osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21", 2026-01-05 00:42:45.740105 | orchestrator |  "vg_name": "ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21" 2026-01-05 00:42:45.740116 | orchestrator |  }, 2026-01-05 00:42:45.740127 | orchestrator |  { 2026-01-05 00:42:45.740139 | orchestrator |  "lv_name": "osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc", 2026-01-05 00:42:45.740150 | orchestrator |  "vg_name": "ceph-f6123202-7d2d-5b15-b15a-b013203adbfc" 2026-01-05 00:42:45.740161 | orchestrator |  } 2026-01-05 00:42:45.740172 | orchestrator |  ], 2026-01-05 00:42:45.740184 | orchestrator |  "pv": [ 2026-01-05 00:42:45.740195 | orchestrator |  { 2026-01-05 00:42:45.740205 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 00:42:45.740217 | orchestrator |  "vg_name": "ceph-f6123202-7d2d-5b15-b15a-b013203adbfc" 2026-01-05 00:42:45.740227 | orchestrator |  }, 2026-01-05 00:42:45.740238 | orchestrator |  { 2026-01-05 00:42:45.740249 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 00:42:45.740259 | orchestrator |  "vg_name": "ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21" 2026-01-05 00:42:45.740270 | orchestrator |  } 2026-01-05 00:42:45.740282 | orchestrator |  ] 2026-01-05 00:42:45.740294 | orchestrator |  } 2026-01-05 00:42:45.740305 | orchestrator | } 2026-01-05 00:42:45.740316 | orchestrator | 2026-01-05 00:42:45.740328 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-05 00:42:45.740339 | orchestrator | 2026-01-05 00:42:45.740351 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 00:42:45.740362 | orchestrator | Monday 05 January 2026 00:42:42 +0000 (0:00:00.324) 0:00:25.585 ******** 2026-01-05 00:42:45.740383 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 00:42:45.740395 | orchestrator | 2026-01-05 00:42:45.740406 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 
00:42:45.740417 | orchestrator | Monday 05 January 2026 00:42:42 +0000 (0:00:00.283) 0:00:25.868 ******** 2026-01-05 00:42:45.740428 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:42:45.740439 | orchestrator | 2026-01-05 00:42:45.740451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740462 | orchestrator | Monday 05 January 2026 00:42:43 +0000 (0:00:00.244) 0:00:26.112 ******** 2026-01-05 00:42:45.740472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:42:45.740484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:42:45.740495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:42:45.740507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:42:45.740517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:42:45.740582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:42:45.740597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:42:45.740617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:42:45.740630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 00:42:45.740642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:42:45.740654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:42:45.740665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:42:45.740676 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:42:45.740687 | orchestrator | 2026-01-05 00:42:45.740699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740711 | orchestrator | Monday 05 January 2026 00:42:43 +0000 (0:00:00.521) 0:00:26.634 ******** 2026-01-05 00:42:45.740722 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740733 | orchestrator | 2026-01-05 00:42:45.740744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740756 | orchestrator | Monday 05 January 2026 00:42:43 +0000 (0:00:00.256) 0:00:26.890 ******** 2026-01-05 00:42:45.740768 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740780 | orchestrator | 2026-01-05 00:42:45.740792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740804 | orchestrator | Monday 05 January 2026 00:42:44 +0000 (0:00:00.217) 0:00:27.107 ******** 2026-01-05 00:42:45.740815 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740827 | orchestrator | 2026-01-05 00:42:45.740839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740850 | orchestrator | Monday 05 January 2026 00:42:45 +0000 (0:00:00.913) 0:00:28.021 ******** 2026-01-05 00:42:45.740862 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740874 | orchestrator | 2026-01-05 00:42:45.740887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:45.740899 | orchestrator | Monday 05 January 2026 00:42:45 +0000 (0:00:00.223) 0:00:28.244 ******** 2026-01-05 00:42:45.740910 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740917 | orchestrator | 2026-01-05 00:42:45.740923 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-05 00:42:45.740930 | orchestrator | Monday 05 January 2026 00:42:45 +0000 (0:00:00.239) 0:00:28.484 ******** 2026-01-05 00:42:45.740945 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:45.740952 | orchestrator | 2026-01-05 00:42:45.740968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.670670 | orchestrator | Monday 05 January 2026 00:42:45 +0000 (0:00:00.235) 0:00:28.719 ******** 2026-01-05 00:42:57.670824 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.670845 | orchestrator | 2026-01-05 00:42:57.670858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.670870 | orchestrator | Monday 05 January 2026 00:42:45 +0000 (0:00:00.210) 0:00:28.929 ******** 2026-01-05 00:42:57.670881 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.670891 | orchestrator | 2026-01-05 00:42:57.670903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.670914 | orchestrator | Monday 05 January 2026 00:42:46 +0000 (0:00:00.217) 0:00:29.146 ******** 2026-01-05 00:42:57.670925 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb) 2026-01-05 00:42:57.670937 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb) 2026-01-05 00:42:57.670948 | orchestrator | 2026-01-05 00:42:57.670959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.670970 | orchestrator | Monday 05 January 2026 00:42:46 +0000 (0:00:00.513) 0:00:29.660 ******** 2026-01-05 00:42:57.670980 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9) 2026-01-05 00:42:57.670991 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9) 2026-01-05 00:42:57.671002 | orchestrator | 2026-01-05 00:42:57.671013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.671024 | orchestrator | Monday 05 January 2026 00:42:47 +0000 (0:00:00.476) 0:00:30.136 ******** 2026-01-05 00:42:57.671034 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff) 2026-01-05 00:42:57.671046 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff) 2026-01-05 00:42:57.671058 | orchestrator | 2026-01-05 00:42:57.671072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.671084 | orchestrator | Monday 05 January 2026 00:42:47 +0000 (0:00:00.469) 0:00:30.606 ******** 2026-01-05 00:42:57.671097 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302) 2026-01-05 00:42:57.671110 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302) 2026-01-05 00:42:57.671123 | orchestrator | 2026-01-05 00:42:57.671136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:42:57.671149 | orchestrator | Monday 05 January 2026 00:42:48 +0000 (0:00:00.744) 0:00:31.350 ******** 2026-01-05 00:42:57.671161 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:42:57.671174 | orchestrator | 2026-01-05 00:42:57.671186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671200 | orchestrator | Monday 05 January 2026 00:42:48 +0000 (0:00:00.596) 0:00:31.946 ******** 2026-01-05 00:42:57.671213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-05 00:42:57.671227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:42:57.671240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:42:57.671252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:42:57.671265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:42:57.671277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:42:57.671314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:42:57.671328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:42:57.671341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 00:42:57.671354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:42:57.671367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:42:57.671381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:42:57.671401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:42:57.671418 | orchestrator | 2026-01-05 00:42:57.671436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671453 | orchestrator | Monday 05 January 2026 00:42:49 +0000 (0:00:00.970) 0:00:32.917 ******** 2026-01-05 00:42:57.671471 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671490 | orchestrator | 2026-01-05 
00:42:57.671508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671563 | orchestrator | Monday 05 January 2026 00:42:50 +0000 (0:00:00.203) 0:00:33.121 ******** 2026-01-05 00:42:57.671576 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671587 | orchestrator | 2026-01-05 00:42:57.671598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671609 | orchestrator | Monday 05 January 2026 00:42:50 +0000 (0:00:00.224) 0:00:33.345 ******** 2026-01-05 00:42:57.671620 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671631 | orchestrator | 2026-01-05 00:42:57.671662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671674 | orchestrator | Monday 05 January 2026 00:42:50 +0000 (0:00:00.210) 0:00:33.556 ******** 2026-01-05 00:42:57.671684 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671695 | orchestrator | 2026-01-05 00:42:57.671706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671716 | orchestrator | Monday 05 January 2026 00:42:50 +0000 (0:00:00.203) 0:00:33.760 ******** 2026-01-05 00:42:57.671727 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671737 | orchestrator | 2026-01-05 00:42:57.671748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671759 | orchestrator | Monday 05 January 2026 00:42:50 +0000 (0:00:00.197) 0:00:33.957 ******** 2026-01-05 00:42:57.671769 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671780 | orchestrator | 2026-01-05 00:42:57.671791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671802 | orchestrator | Monday 05 January 2026 00:42:51 +0000 (0:00:00.213) 
0:00:34.170 ******** 2026-01-05 00:42:57.671812 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671823 | orchestrator | 2026-01-05 00:42:57.671834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671844 | orchestrator | Monday 05 January 2026 00:42:51 +0000 (0:00:00.222) 0:00:34.393 ******** 2026-01-05 00:42:57.671855 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671866 | orchestrator | 2026-01-05 00:42:57.671876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671887 | orchestrator | Monday 05 January 2026 00:42:51 +0000 (0:00:00.216) 0:00:34.610 ******** 2026-01-05 00:42:57.671898 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 00:42:57.671909 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 00:42:57.671920 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 00:42:57.671930 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 00:42:57.671941 | orchestrator | 2026-01-05 00:42:57.671952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.671972 | orchestrator | Monday 05 January 2026 00:42:52 +0000 (0:00:00.915) 0:00:35.525 ******** 2026-01-05 00:42:57.671983 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.671994 | orchestrator | 2026-01-05 00:42:57.672005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.672016 | orchestrator | Monday 05 January 2026 00:42:52 +0000 (0:00:00.231) 0:00:35.756 ******** 2026-01-05 00:42:57.672026 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.672037 | orchestrator | 2026-01-05 00:42:57.672048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.672058 | orchestrator | Monday 05 
January 2026 00:42:53 +0000 (0:00:00.723) 0:00:36.480 ******** 2026-01-05 00:42:57.672069 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.672079 | orchestrator | 2026-01-05 00:42:57.672090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:42:57.672101 | orchestrator | Monday 05 January 2026 00:42:53 +0000 (0:00:00.200) 0:00:36.681 ******** 2026-01-05 00:42:57.672111 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.672122 | orchestrator | 2026-01-05 00:42:57.672132 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 00:42:57.672161 | orchestrator | Monday 05 January 2026 00:42:53 +0000 (0:00:00.216) 0:00:36.898 ******** 2026-01-05 00:42:57.672181 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.672200 | orchestrator | 2026-01-05 00:42:57.672219 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 00:42:57.672239 | orchestrator | Monday 05 January 2026 00:42:54 +0000 (0:00:00.168) 0:00:37.066 ******** 2026-01-05 00:42:57.672259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '846bb30c-958c-57a2-8682-0625433ec757'}}) 2026-01-05 00:42:57.672281 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'be99b097-8f9c-5b18-b9e6-1dc57f49383d'}}) 2026-01-05 00:42:57.672297 | orchestrator | 2026-01-05 00:42:57.672308 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 00:42:57.672319 | orchestrator | Monday 05 January 2026 00:42:54 +0000 (0:00:00.217) 0:00:37.284 ******** 2026-01-05 00:42:57.672331 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'}) 2026-01-05 00:42:57.672344 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'}) 2026-01-05 00:42:57.672355 | orchestrator | 2026-01-05 00:42:57.672366 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 00:42:57.672377 | orchestrator | Monday 05 January 2026 00:42:56 +0000 (0:00:01.888) 0:00:39.172 ******** 2026-01-05 00:42:57.672387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})  2026-01-05 00:42:57.672400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})  2026-01-05 00:42:57.672410 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:42:57.672421 | orchestrator | 2026-01-05 00:42:57.672432 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 00:42:57.672442 | orchestrator | Monday 05 January 2026 00:42:56 +0000 (0:00:00.186) 0:00:39.359 ******** 2026-01-05 00:42:57.672453 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'}) 2026-01-05 00:42:57.672474 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'}) 2026-01-05 00:43:03.113231 | orchestrator | 2026-01-05 00:43:03.113424 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 00:43:03.113471 | orchestrator | Monday 05 January 2026 00:42:57 +0000 (0:00:01.288) 0:00:40.647 ******** 2026-01-05 00:43:03.113484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 
'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.113498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.113509 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113560 | orchestrator |
2026-01-05 00:43:03.113573 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 00:43:03.113584 | orchestrator | Monday 05 January 2026 00:42:57 +0000 (0:00:00.149) 0:00:40.797 ********
2026-01-05 00:43:03.113595 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113606 | orchestrator |
2026-01-05 00:43:03.113618 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 00:43:03.113628 | orchestrator | Monday 05 January 2026 00:42:57 +0000 (0:00:00.143) 0:00:40.940 ********
2026-01-05 00:43:03.113640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.113651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.113662 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113673 | orchestrator |
2026-01-05 00:43:03.113684 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 00:43:03.113694 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.157) 0:00:41.098 ********
2026-01-05 00:43:03.113705 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113716 | orchestrator |
2026-01-05 00:43:03.113727 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 00:43:03.113738 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.125) 0:00:41.223 ********
2026-01-05 00:43:03.113751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.113764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.113777 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113790 | orchestrator |
2026-01-05 00:43:03.113803 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 00:43:03.113834 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.292) 0:00:41.516 ********
2026-01-05 00:43:03.113847 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113860 | orchestrator |
2026-01-05 00:43:03.113873 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 00:43:03.113886 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.128) 0:00:41.644 ********
2026-01-05 00:43:03.113899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.113911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.113923 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.113936 | orchestrator |
2026-01-05 00:43:03.113949 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 00:43:03.113961 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.142) 0:00:41.787 ********
2026-01-05 00:43:03.113974 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:03.113988 | orchestrator |
2026-01-05 00:43:03.114001 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:43:03.114089 | orchestrator | Monday 05 January 2026 00:42:58 +0000 (0:00:00.126) 0:00:41.914 ********
2026-01-05 00:43:03.114106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.114120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.114131 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114142 | orchestrator |
2026-01-05 00:43:03.114153 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:43:03.114163 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.149) 0:00:42.063 ********
2026-01-05 00:43:03.114174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.114185 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.114196 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114207 | orchestrator |
2026-01-05 00:43:03.114218 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:43:03.114248 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.148) 0:00:42.211 ********
2026-01-05 00:43:03.114259 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:03.114270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:03.114281 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114292 | orchestrator |
2026-01-05 00:43:03.114303 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:43:03.114313 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.146) 0:00:42.358 ********
2026-01-05 00:43:03.114324 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114335 | orchestrator |
2026-01-05 00:43:03.114346 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:43:03.114357 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.122) 0:00:42.481 ********
2026-01-05 00:43:03.114367 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114378 | orchestrator |
2026-01-05 00:43:03.114389 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:43:03.114400 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.137) 0:00:42.619 ********
2026-01-05 00:43:03.114410 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114421 | orchestrator |
2026-01-05 00:43:03.114432 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:43:03.114443 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.115) 0:00:42.735 ********
2026-01-05 00:43:03.114454 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:43:03.114465 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:43:03.114476 | orchestrator | }
2026-01-05 00:43:03.114487 | orchestrator |
2026-01-05 00:43:03.114498 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:43:03.114509 | orchestrator | Monday 05 January 2026 00:42:59 +0000 (0:00:00.136) 0:00:42.871 ********
2026-01-05 00:43:03.114542 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:43:03.114553 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:43:03.114564 | orchestrator | }
2026-01-05 00:43:03.114575 | orchestrator |
2026-01-05 00:43:03.114585 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:43:03.114596 | orchestrator | Monday 05 January 2026 00:43:00 +0000 (0:00:00.141) 0:00:43.013 ********
2026-01-05 00:43:03.114615 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:43:03.114627 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:43:03.114638 | orchestrator | }
2026-01-05 00:43:03.114648 | orchestrator |
2026-01-05 00:43:03.114659 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:43:03.114670 | orchestrator | Monday 05 January 2026 00:43:00 +0000 (0:00:00.367) 0:00:43.381 ********
2026-01-05 00:43:03.114681 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:03.114692 | orchestrator |
2026-01-05 00:43:03.114703 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:43:03.114714 | orchestrator | Monday 05 January 2026 00:43:00 +0000 (0:00:00.501) 0:00:43.883 ********
2026-01-05 00:43:03.114725 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:03.114736 | orchestrator |
2026-01-05 00:43:03.114747 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:43:03.114758 | orchestrator | Monday 05 January 2026 00:43:01 +0000 (0:00:00.518) 0:00:44.401 ********
2026-01-05 00:43:03.114769 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:03.114779 | orchestrator |
2026-01-05 00:43:03.114790 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:43:03.114801 | orchestrator | Monday 05 January 2026 00:43:01 +0000 (0:00:00.533) 0:00:44.935 ********
2026-01-05 00:43:03.114812 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:03.114822 | orchestrator |
2026-01-05 00:43:03.114833 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:43:03.114844 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.177) 0:00:45.113 ********
2026-01-05 00:43:03.114855 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114866 | orchestrator |
2026-01-05 00:43:03.114876 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:43:03.114887 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.115) 0:00:45.229 ********
2026-01-05 00:43:03.114898 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.114909 | orchestrator |
2026-01-05 00:43:03.114920 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:43:03.114930 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.145) 0:00:45.374 ********
2026-01-05 00:43:03.114941 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:43:03.114952 | orchestrator |  "vgs_report": {
2026-01-05 00:43:03.114964 | orchestrator |  "vg": []
2026-01-05 00:43:03.114975 | orchestrator |  }
2026-01-05 00:43:03.114985 | orchestrator | }
2026-01-05 00:43:03.114996 | orchestrator |
2026-01-05 00:43:03.115007 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:43:03.115018 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.148) 0:00:45.523 ********
2026-01-05 00:43:03.115028 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.115039 | orchestrator |
2026-01-05 00:43:03.115050 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:43:03.115060 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.137) 0:00:45.660 ********
2026-01-05 00:43:03.115071 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.115082 | orchestrator |
2026-01-05 00:43:03.115093 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:43:03.115104 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.145) 0:00:45.806 ********
2026-01-05 00:43:03.115114 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.115125 | orchestrator |
2026-01-05 00:43:03.115136 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:43:03.115154 | orchestrator | Monday 05 January 2026 00:43:02 +0000 (0:00:00.146) 0:00:45.953 ********
2026-01-05 00:43:03.115166 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:03.115177 | orchestrator |
2026-01-05 00:43:03.115194 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:43:08.208352 | orchestrator | Monday 05 January 2026 00:43:03 +0000 (0:00:00.142) 0:00:46.095 ********
2026-01-05 00:43:08.208592 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208624 | orchestrator |
2026-01-05 00:43:08.208645 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:43:08.208664 | orchestrator | Monday 05 January 2026 00:43:03 +0000 (0:00:00.392) 0:00:46.488 ********
2026-01-05 00:43:08.208682 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208700 | orchestrator |
2026-01-05 00:43:08.208718 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:43:08.208735 | orchestrator | Monday 05 January 2026 00:43:03 +0000 (0:00:00.150) 0:00:46.639 ********
2026-01-05 00:43:08.208746 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208755 | orchestrator |
2026-01-05 00:43:08.208765 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:43:08.208774 | orchestrator | Monday 05 January 2026 00:43:03 +0000 (0:00:00.155) 0:00:46.795 ********
2026-01-05 00:43:08.208783 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208793 | orchestrator |
2026-01-05 00:43:08.208802 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:43:08.208811 | orchestrator | Monday 05 January 2026 00:43:03 +0000 (0:00:00.145) 0:00:46.940 ********
2026-01-05 00:43:08.208820 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208830 | orchestrator |
2026-01-05 00:43:08.208839 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:43:08.208848 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.162) 0:00:47.102 ********
2026-01-05 00:43:08.208858 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208867 | orchestrator |
2026-01-05 00:43:08.208877 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:43:08.208886 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.148) 0:00:47.251 ********
2026-01-05 00:43:08.208895 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208904 | orchestrator |
2026-01-05 00:43:08.208914 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:43:08.208923 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.163) 0:00:47.415 ********
2026-01-05 00:43:08.208932 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208942 | orchestrator |
2026-01-05 00:43:08.208951 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:43:08.208961 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.154) 0:00:47.571 ********
2026-01-05 00:43:08.208970 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.208979 | orchestrator |
2026-01-05 00:43:08.208989 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:43:08.208998 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.154) 0:00:47.725 ********
2026-01-05 00:43:08.209007 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209017 | orchestrator |
2026-01-05 00:43:08.209026 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:43:08.209052 | orchestrator | Monday 05 January 2026 00:43:04 +0000 (0:00:00.156) 0:00:47.881 ********
2026-01-05 00:43:08.209064 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209084 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209094 | orchestrator |
2026-01-05 00:43:08.209104 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 00:43:08.209113 | orchestrator | Monday 05 January 2026 00:43:05 +0000 (0:00:00.185) 0:00:48.067 ********
2026-01-05 00:43:08.209122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209151 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209161 | orchestrator |
2026-01-05 00:43:08.209170 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 00:43:08.209180 | orchestrator | Monday 05 January 2026 00:43:05 +0000 (0:00:00.171) 0:00:48.238 ********
2026-01-05 00:43:08.209189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209208 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209218 | orchestrator |
2026-01-05 00:43:08.209227 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 00:43:08.209237 | orchestrator | Monday 05 January 2026 00:43:05 +0000 (0:00:00.414) 0:00:48.653 ********
2026-01-05 00:43:08.209246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209266 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209275 | orchestrator |
2026-01-05 00:43:08.209307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 00:43:08.209318 | orchestrator | Monday 05 January 2026 00:43:05 +0000 (0:00:00.158) 0:00:48.811 ********
2026-01-05 00:43:08.209328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209347 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209356 | orchestrator |
2026-01-05 00:43:08.209366 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 00:43:08.209375 | orchestrator | Monday 05 January 2026 00:43:05 +0000 (0:00:00.163) 0:00:48.975 ********
2026-01-05 00:43:08.209385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209405 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209414 | orchestrator |
2026-01-05 00:43:08.209424 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-05 00:43:08.209433 | orchestrator | Monday 05 January 2026 00:43:06 +0000 (0:00:00.152) 0:00:49.127 ********
2026-01-05 00:43:08.209443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209462 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209472 | orchestrator |
2026-01-05 00:43:08.209481 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-05 00:43:08.209491 | orchestrator | Monday 05 January 2026 00:43:06 +0000 (0:00:00.159) 0:00:49.286 ********
2026-01-05 00:43:08.209500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209566 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209583 | orchestrator |
2026-01-05 00:43:08.209601 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-05 00:43:08.209612 | orchestrator | Monday 05 January 2026 00:43:06 +0000 (0:00:00.174) 0:00:49.461 ********
2026-01-05 00:43:08.209624 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:08.209640 | orchestrator |
2026-01-05 00:43:08.209656 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-05 00:43:08.209672 | orchestrator | Monday 05 January 2026 00:43:07 +0000 (0:00:00.536) 0:00:49.998 ********
2026-01-05 00:43:08.209688 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:08.209705 | orchestrator |
2026-01-05 00:43:08.209721 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-05 00:43:08.209736 | orchestrator | Monday 05 January 2026 00:43:07 +0000 (0:00:00.516) 0:00:50.514 ********
2026-01-05 00:43:08.209753 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:43:08.209770 | orchestrator |
2026-01-05 00:43:08.209784 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-05 00:43:08.209799 | orchestrator | Monday 05 January 2026 00:43:07 +0000 (0:00:00.147) 0:00:50.662 ********
2026-01-05 00:43:08.209815 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'vg_name': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209833 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'vg_name': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209848 | orchestrator |
2026-01-05 00:43:08.209866 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-05 00:43:08.209882 | orchestrator | Monday 05 January 2026 00:43:07 +0000 (0:00:00.183) 0:00:50.845 ********
2026-01-05 00:43:08.209899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.209913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:08.209923 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:08.209932 | orchestrator |
2026-01-05 00:43:08.209947 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-05 00:43:08.209962 | orchestrator | Monday 05 January 2026 00:43:08 +0000 (0:00:00.175) 0:00:51.021 ********
2026-01-05 00:43:08.209978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:08.210005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:14.603843 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:14.603976 | orchestrator |
2026-01-05 00:43:14.603994 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-05 00:43:14.604007 | orchestrator | Monday 05 January 2026 00:43:08 +0000 (0:00:00.168) 0:00:51.189 ********
2026-01-05 00:43:14.604019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'})
2026-01-05 00:43:14.604033 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'})
2026-01-05 00:43:14.604044 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:43:14.604055 | orchestrator |
2026-01-05 00:43:14.604067 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-05 00:43:14.604107 | orchestrator | Monday 05 January 2026 00:43:08 +0000 (0:00:00.167) 0:00:51.357 ********
2026-01-05 00:43:14.604120 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:43:14.604131 | orchestrator |  "lvm_report": {
2026-01-05 00:43:14.604144 | orchestrator |  "lv": [
2026-01-05 00:43:14.604156 | orchestrator |  {
2026-01-05 00:43:14.604167 | orchestrator |  "lv_name": "osd-block-846bb30c-958c-57a2-8682-0625433ec757",
2026-01-05 00:43:14.604179 | orchestrator |  "vg_name": "ceph-846bb30c-958c-57a2-8682-0625433ec757"
2026-01-05 00:43:14.604190 | orchestrator |  },
2026-01-05 00:43:14.604201 | orchestrator |  {
2026-01-05 00:43:14.604211 | orchestrator |  "lv_name": "osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d",
2026-01-05 00:43:14.604222 | orchestrator |  "vg_name": "ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d"
2026-01-05 00:43:14.604233 | orchestrator |  }
2026-01-05 00:43:14.604244 | orchestrator |  ],
2026-01-05 00:43:14.604255 | orchestrator |  "pv": [
2026-01-05 00:43:14.604266 | orchestrator |  {
2026-01-05 00:43:14.604276 | orchestrator |  "pv_name": "/dev/sdb",
2026-01-05 00:43:14.604287 | orchestrator |  "vg_name": "ceph-846bb30c-958c-57a2-8682-0625433ec757"
2026-01-05 00:43:14.604298 | orchestrator |  },
2026-01-05 00:43:14.604309 | orchestrator |  {
2026-01-05 00:43:14.604320 | orchestrator |  "pv_name": "/dev/sdc",
2026-01-05 00:43:14.604331 | orchestrator |  "vg_name": "ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d"
2026-01-05 00:43:14.604341 | orchestrator |  }
2026-01-05 00:43:14.604354 | orchestrator |  ]
2026-01-05 00:43:14.604367 | orchestrator |  }
2026-01-05 00:43:14.604381 | orchestrator | }
2026-01-05 00:43:14.604394 | orchestrator |
2026-01-05 00:43:14.604406 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:43:14.604419 | orchestrator |
2026-01-05 00:43:14.604432 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:43:14.604445 | orchestrator | Monday 05 January 2026 00:43:09 +0000 (0:00:00.636) 0:00:51.993 ********
2026-01-05 00:43:14.604457 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:43:14.604470 | orchestrator |
2026-01-05 00:43:14.604483 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:43:14.604497 | orchestrator | Monday 05 January 2026 00:43:09 +0000 (0:00:00.268) 0:00:52.262 ********
2026-01-05 00:43:14.604510 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:43:14.604586 | orchestrator |
2026-01-05 00:43:14.604600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.604614 | orchestrator | Monday 05 January 2026 00:43:09 +0000 (0:00:00.285) 0:00:52.547 ********
2026-01-05 00:43:14.604627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:43:14.604641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:43:14.604654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:43:14.604667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:43:14.604680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:43:14.604693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:43:14.604706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:43:14.604718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:43:14.604729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-05 00:43:14.604739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:43:14.604764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:43:14.604776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:43:14.604786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:43:14.604797 | orchestrator |
2026-01-05 00:43:14.604808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.604824 | orchestrator | Monday 05 January 2026 00:43:10 +0000 (0:00:00.465) 0:00:53.013 ********
2026-01-05 00:43:14.604835 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.604846 | orchestrator |
2026-01-05 00:43:14.604857 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.604868 | orchestrator | Monday 05 January 2026 00:43:10 +0000 (0:00:00.211) 0:00:53.224 ********
2026-01-05 00:43:14.604878 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.604889 | orchestrator |
2026-01-05 00:43:14.604900 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.604930 | orchestrator | Monday 05 January 2026 00:43:10 +0000 (0:00:00.210) 0:00:53.434 ********
2026-01-05 00:43:14.604941 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.604952 | orchestrator |
2026-01-05 00:43:14.604963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.604974 | orchestrator | Monday 05 January 2026 00:43:10 +0000 (0:00:00.294) 0:00:53.729 ********
2026-01-05 00:43:14.604984 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.604995 | orchestrator |
2026-01-05 00:43:14.605006 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605017 | orchestrator | Monday 05 January 2026 00:43:10 +0000 (0:00:00.210) 0:00:53.939 ********
2026-01-05 00:43:14.605028 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.605039 | orchestrator |
2026-01-05 00:43:14.605049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605060 | orchestrator | Monday 05 January 2026 00:43:11 +0000 (0:00:00.666) 0:00:54.606 ********
2026-01-05 00:43:14.605071 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.605082 | orchestrator |
2026-01-05 00:43:14.605092 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605103 | orchestrator | Monday 05 January 2026 00:43:11 +0000 (0:00:00.224) 0:00:54.830 ********
2026-01-05 00:43:14.605114 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.605124 | orchestrator |
2026-01-05 00:43:14.605135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605146 | orchestrator | Monday 05 January 2026 00:43:12 +0000 (0:00:00.225) 0:00:55.056 ********
2026-01-05 00:43:14.605156 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:14.605167 | orchestrator |
2026-01-05 00:43:14.605178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605189 | orchestrator | Monday 05 January 2026 00:43:12 +0000 (0:00:00.192) 0:00:55.249 ********
2026-01-05 00:43:14.605200 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078)
2026-01-05 00:43:14.605212 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078)
2026-01-05 00:43:14.605223 | orchestrator |
2026-01-05 00:43:14.605234 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605245 | orchestrator | Monday 05 January 2026 00:43:12 +0000 (0:00:00.407) 0:00:55.657 ********
2026-01-05 00:43:14.605305 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52)
2026-01-05 00:43:14.605317 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52)
2026-01-05 00:43:14.605328 | orchestrator |
2026-01-05 00:43:14.605339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605362 | orchestrator | Monday 05 January 2026 00:43:13 +0000 (0:00:00.405) 0:00:56.063 ********
2026-01-05 00:43:14.605373 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3)
2026-01-05 00:43:14.605384 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3)
2026-01-05 00:43:14.605395 | orchestrator |
2026-01-05 00:43:14.605405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605416 | orchestrator | Monday 05 January 2026 00:43:13 +0000 (0:00:00.396) 0:00:56.460 ********
2026-01-05 00:43:14.605427 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c)
2026-01-05 00:43:14.605437 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c)
2026-01-05 00:43:14.605448 | orchestrator |
2026-01-05 00:43:14.605459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:43:14.605469 | orchestrator | Monday 05 January 2026 00:43:13 +0000 (0:00:00.397) 0:00:56.857 ********
2026-01-05 00:43:14.605480 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:43:14.605491 | orchestrator |
2026-01-05 00:43:14.605502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:14.605512 | orchestrator | Monday 05 January 2026 00:43:14 +0000 (0:00:00.326) 0:00:57.183 ********
2026-01-05 00:43:14.605541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:43:14.605552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:43:14.605563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:43:14.605574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:43:14.605584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:43:14.605595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:43:14.605605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:43:14.605616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:43:14.605627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-05 00:43:14.605637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:43:14.605648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:43:14.605666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:43:23.702627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:43:23.702741 | orchestrator |
2026-01-05 00:43:23.702757 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:23.702770 | orchestrator | Monday 05 January 2026 00:43:14 +0000 (0:00:00.397) 0:00:57.581 ********
2026-01-05 00:43:23.702781 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:23.702792 | orchestrator |
2026-01-05 00:43:23.702803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:23.702814 | orchestrator | Monday 05 January 2026 00:43:14 +0000 (0:00:00.194) 0:00:57.775 ********
2026-01-05 00:43:23.702825 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:23.702836 | orchestrator |
2026-01-05 00:43:23.702846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:23.702857 | orchestrator | Monday 05 January 2026 00:43:15 +0000 (0:00:00.535) 0:00:58.311 ********
2026-01-05 00:43:23.702868 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:43:23.702901 | orchestrator |
2026-01-05 00:43:23.702913 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:43:23.702923 |
orchestrator | Monday 05 January 2026 00:43:15 +0000 (0:00:00.239) 0:00:58.550 ******** 2026-01-05 00:43:23.702934 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.702944 | orchestrator | 2026-01-05 00:43:23.702955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.702966 | orchestrator | Monday 05 January 2026 00:43:15 +0000 (0:00:00.185) 0:00:58.736 ******** 2026-01-05 00:43:23.702976 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.702987 | orchestrator | 2026-01-05 00:43:23.702997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703008 | orchestrator | Monday 05 January 2026 00:43:15 +0000 (0:00:00.216) 0:00:58.952 ******** 2026-01-05 00:43:23.703018 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703029 | orchestrator | 2026-01-05 00:43:23.703040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703050 | orchestrator | Monday 05 January 2026 00:43:16 +0000 (0:00:00.205) 0:00:59.158 ******** 2026-01-05 00:43:23.703061 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703071 | orchestrator | 2026-01-05 00:43:23.703082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703092 | orchestrator | Monday 05 January 2026 00:43:16 +0000 (0:00:00.188) 0:00:59.347 ******** 2026-01-05 00:43:23.703103 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703113 | orchestrator | 2026-01-05 00:43:23.703125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703138 | orchestrator | Monday 05 January 2026 00:43:16 +0000 (0:00:00.194) 0:00:59.542 ******** 2026-01-05 00:43:23.703151 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 00:43:23.703180 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-05 00:43:23.703195 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 00:43:23.703207 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 00:43:23.703220 | orchestrator | 2026-01-05 00:43:23.703233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703245 | orchestrator | Monday 05 January 2026 00:43:17 +0000 (0:00:00.678) 0:01:00.220 ******** 2026-01-05 00:43:23.703257 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703270 | orchestrator | 2026-01-05 00:43:23.703282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703294 | orchestrator | Monday 05 January 2026 00:43:17 +0000 (0:00:00.196) 0:01:00.417 ******** 2026-01-05 00:43:23.703307 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703319 | orchestrator | 2026-01-05 00:43:23.703332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703344 | orchestrator | Monday 05 January 2026 00:43:17 +0000 (0:00:00.217) 0:01:00.634 ******** 2026-01-05 00:43:23.703356 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703370 | orchestrator | 2026-01-05 00:43:23.703382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:43:23.703394 | orchestrator | Monday 05 January 2026 00:43:17 +0000 (0:00:00.217) 0:01:00.852 ******** 2026-01-05 00:43:23.703407 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703420 | orchestrator | 2026-01-05 00:43:23.703432 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 00:43:23.703445 | orchestrator | Monday 05 January 2026 00:43:18 +0000 (0:00:00.217) 0:01:01.070 ******** 2026-01-05 00:43:23.703458 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
00:43:23.703471 | orchestrator | 2026-01-05 00:43:23.703483 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 00:43:23.703494 | orchestrator | Monday 05 January 2026 00:43:18 +0000 (0:00:00.349) 0:01:01.419 ******** 2026-01-05 00:43:23.703504 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c427200-cd92-5345-a12e-93ab1a68a0a9'}}) 2026-01-05 00:43:23.703556 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0a3b48c-8251-5295-95c4-04cb80bcb769'}}) 2026-01-05 00:43:23.703569 | orchestrator | 2026-01-05 00:43:23.703580 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 00:43:23.703590 | orchestrator | Monday 05 January 2026 00:43:18 +0000 (0:00:00.204) 0:01:01.624 ******** 2026-01-05 00:43:23.703602 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'}) 2026-01-05 00:43:23.703615 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'}) 2026-01-05 00:43:23.703626 | orchestrator | 2026-01-05 00:43:23.703637 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 00:43:23.703665 | orchestrator | Monday 05 January 2026 00:43:20 +0000 (0:00:01.889) 0:01:03.514 ******** 2026-01-05 00:43:23.703676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:23.703688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:23.703699 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 00:43:23.703710 | orchestrator | 2026-01-05 00:43:23.703721 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 00:43:23.703731 | orchestrator | Monday 05 January 2026 00:43:20 +0000 (0:00:00.189) 0:01:03.704 ******** 2026-01-05 00:43:23.703743 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'}) 2026-01-05 00:43:23.703754 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'}) 2026-01-05 00:43:23.703765 | orchestrator | 2026-01-05 00:43:23.703775 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 00:43:23.703786 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:01.321) 0:01:05.025 ******** 2026-01-05 00:43:23.703797 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:23.703807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:23.703818 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703829 | orchestrator | 2026-01-05 00:43:23.703839 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-05 00:43:23.703850 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.173) 0:01:05.198 ******** 2026-01-05 00:43:23.703860 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703871 | orchestrator | 2026-01-05 00:43:23.703882 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-05 00:43:23.703893 | 
orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.134) 0:01:05.333 ******** 2026-01-05 00:43:23.703904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:23.703920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:23.703931 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703942 | orchestrator | 2026-01-05 00:43:23.703952 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-05 00:43:23.703963 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.164) 0:01:05.497 ******** 2026-01-05 00:43:23.703981 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.703992 | orchestrator | 2026-01-05 00:43:23.704002 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-05 00:43:23.704013 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.145) 0:01:05.643 ******** 2026-01-05 00:43:23.704024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:23.704035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:23.704045 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.704056 | orchestrator | 2026-01-05 00:43:23.704066 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-05 00:43:23.704077 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.174) 0:01:05.817 ******** 2026-01-05 00:43:23.704088 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 00:43:23.704098 | orchestrator | 2026-01-05 00:43:23.704109 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-05 00:43:23.704120 | orchestrator | Monday 05 January 2026 00:43:22 +0000 (0:00:00.146) 0:01:05.964 ******** 2026-01-05 00:43:23.704130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:23.704141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:23.704152 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:23.704162 | orchestrator | 2026-01-05 00:43:23.704173 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-05 00:43:23.704184 | orchestrator | Monday 05 January 2026 00:43:23 +0000 (0:00:00.177) 0:01:06.142 ******** 2026-01-05 00:43:23.704194 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:23.704205 | orchestrator | 2026-01-05 00:43:23.704216 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-05 00:43:23.704227 | orchestrator | Monday 05 January 2026 00:43:23 +0000 (0:00:00.381) 0:01:06.523 ******** 2026-01-05 00:43:23.704245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:29.888189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:29.888313 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888332 | orchestrator | 2026-01-05 00:43:29.888346 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-05 00:43:29.888359 | orchestrator | Monday 05 January 2026 00:43:23 +0000 (0:00:00.160) 0:01:06.684 ******** 2026-01-05 00:43:29.888371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:29.888383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:29.888395 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888406 | orchestrator | 2026-01-05 00:43:29.888417 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-05 00:43:29.888429 | orchestrator | Monday 05 January 2026 00:43:23 +0000 (0:00:00.164) 0:01:06.849 ******** 2026-01-05 00:43:29.888439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:29.888450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:29.888487 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888499 | orchestrator | 2026-01-05 00:43:29.888510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-05 00:43:29.888584 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.149) 0:01:06.998 ******** 2026-01-05 00:43:29.888595 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888606 | orchestrator | 2026-01-05 00:43:29.888617 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-05 00:43:29.888628 | orchestrator | Monday 05 January 2026 00:43:24 +0000 
(0:00:00.151) 0:01:07.150 ******** 2026-01-05 00:43:29.888639 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888650 | orchestrator | 2026-01-05 00:43:29.888661 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-05 00:43:29.888671 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.151) 0:01:07.301 ******** 2026-01-05 00:43:29.888682 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.888693 | orchestrator | 2026-01-05 00:43:29.888705 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-05 00:43:29.888718 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.154) 0:01:07.456 ******** 2026-01-05 00:43:29.888731 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:43:29.888744 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-05 00:43:29.888757 | orchestrator | } 2026-01-05 00:43:29.888770 | orchestrator | 2026-01-05 00:43:29.888783 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-05 00:43:29.888796 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.181) 0:01:07.637 ******** 2026-01-05 00:43:29.888809 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:43:29.888821 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-05 00:43:29.888834 | orchestrator | } 2026-01-05 00:43:29.888847 | orchestrator | 2026-01-05 00:43:29.888859 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-05 00:43:29.888872 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.151) 0:01:07.789 ******** 2026-01-05 00:43:29.888885 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:43:29.888896 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-05 00:43:29.888907 | orchestrator | } 2026-01-05 00:43:29.888918 | orchestrator | 2026-01-05 00:43:29.888929 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-05 00:43:29.888940 | orchestrator | Monday 05 January 2026 00:43:24 +0000 (0:00:00.154) 0:01:07.944 ******** 2026-01-05 00:43:29.888951 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:29.888962 | orchestrator | 2026-01-05 00:43:29.888973 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-05 00:43:29.888984 | orchestrator | Monday 05 January 2026 00:43:25 +0000 (0:00:00.537) 0:01:08.481 ******** 2026-01-05 00:43:29.888994 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:29.889005 | orchestrator | 2026-01-05 00:43:29.889016 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-05 00:43:29.889027 | orchestrator | Monday 05 January 2026 00:43:25 +0000 (0:00:00.505) 0:01:08.987 ******** 2026-01-05 00:43:29.889038 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:29.889049 | orchestrator | 2026-01-05 00:43:29.889060 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-05 00:43:29.889071 | orchestrator | Monday 05 January 2026 00:43:26 +0000 (0:00:00.772) 0:01:09.759 ******** 2026-01-05 00:43:29.889081 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:29.889092 | orchestrator | 2026-01-05 00:43:29.889103 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-05 00:43:29.889114 | orchestrator | Monday 05 January 2026 00:43:26 +0000 (0:00:00.157) 0:01:09.917 ******** 2026-01-05 00:43:29.889125 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889136 | orchestrator | 2026-01-05 00:43:29.889147 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-05 00:43:29.889167 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.123) 0:01:10.040 ******** 2026-01-05 00:43:29.889178 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889189 | orchestrator | 2026-01-05 00:43:29.889200 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-05 00:43:29.889211 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.119) 0:01:10.160 ******** 2026-01-05 00:43:29.889222 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:43:29.889233 | orchestrator |  "vgs_report": { 2026-01-05 00:43:29.889245 | orchestrator |  "vg": [] 2026-01-05 00:43:29.889274 | orchestrator |  } 2026-01-05 00:43:29.889286 | orchestrator | } 2026-01-05 00:43:29.889297 | orchestrator | 2026-01-05 00:43:29.889308 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-05 00:43:29.889319 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.161) 0:01:10.321 ******** 2026-01-05 00:43:29.889330 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889341 | orchestrator | 2026-01-05 00:43:29.889352 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-05 00:43:29.889363 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.131) 0:01:10.453 ******** 2026-01-05 00:43:29.889374 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889384 | orchestrator | 2026-01-05 00:43:29.889395 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-05 00:43:29.889406 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.163) 0:01:10.616 ******** 2026-01-05 00:43:29.889417 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889428 | orchestrator | 2026-01-05 00:43:29.889439 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-05 00:43:29.889450 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.134) 0:01:10.751 ******** 2026-01-05 00:43:29.889461 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889472 | orchestrator | 2026-01-05 00:43:29.889483 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-05 00:43:29.889493 | orchestrator | Monday 05 January 2026 00:43:27 +0000 (0:00:00.141) 0:01:10.892 ******** 2026-01-05 00:43:29.889504 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889533 | orchestrator | 2026-01-05 00:43:29.889545 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-05 00:43:29.889557 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.137) 0:01:11.029 ******** 2026-01-05 00:43:29.889567 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889578 | orchestrator | 2026-01-05 00:43:29.889611 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-05 00:43:29.889623 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.120) 0:01:11.150 ******** 2026-01-05 00:43:29.889634 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889645 | orchestrator | 2026-01-05 00:43:29.889656 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-05 00:43:29.889668 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.131) 0:01:11.282 ******** 2026-01-05 00:43:29.889679 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889690 | orchestrator | 2026-01-05 00:43:29.889701 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-05 00:43:29.889712 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.282) 0:01:11.564 ******** 2026-01-05 00:43:29.889723 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889733 | orchestrator | 2026-01-05 00:43:29.889749 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-05 00:43:29.889761 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.145) 0:01:11.709 ******** 2026-01-05 00:43:29.889772 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889783 | orchestrator | 2026-01-05 00:43:29.889794 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-05 00:43:29.889805 | orchestrator | Monday 05 January 2026 00:43:28 +0000 (0:00:00.151) 0:01:11.860 ******** 2026-01-05 00:43:29.889823 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889834 | orchestrator | 2026-01-05 00:43:29.889845 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-05 00:43:29.889856 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.151) 0:01:12.012 ******** 2026-01-05 00:43:29.889867 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889878 | orchestrator | 2026-01-05 00:43:29.889889 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-05 00:43:29.889900 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.129) 0:01:12.141 ******** 2026-01-05 00:43:29.889911 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889922 | orchestrator | 2026-01-05 00:43:29.889933 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-05 00:43:29.889944 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.138) 0:01:12.279 ******** 2026-01-05 00:43:29.889955 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.889966 | orchestrator | 2026-01-05 00:43:29.889977 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-05 00:43:29.889988 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.151) 0:01:12.431 ******** 2026-01-05 00:43:29.889999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:29.890010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:29.890091 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.890103 | orchestrator | 2026-01-05 00:43:29.890114 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-05 00:43:29.890125 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.151) 0:01:12.583 ******** 2026-01-05 00:43:29.890136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:29.890147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:29.890158 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:29.890169 | orchestrator | 2026-01-05 00:43:29.890180 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-05 00:43:29.890191 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.139) 0:01:12.722 ******** 2026-01-05 00:43:29.890210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.931370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.931484 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.931596 | orchestrator | 2026-01-05 00:43:32.931613 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-05 00:43:32.931627 | orchestrator | Monday 05 January 2026 00:43:29 +0000 (0:00:00.148) 0:01:12.871 ******** 2026-01-05 00:43:32.931638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.931650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.931661 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.931672 | orchestrator | 2026-01-05 00:43:32.931684 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 00:43:32.931695 | orchestrator | Monday 05 January 2026 00:43:30 +0000 (0:00:00.141) 0:01:13.012 ******** 2026-01-05 00:43:32.931737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.931749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.931760 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.931771 | orchestrator | 2026-01-05 00:43:32.931782 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 00:43:32.931793 | orchestrator | Monday 05 January 2026 00:43:30 +0000 (0:00:00.154) 0:01:13.166 ******** 2026-01-05 00:43:32.931804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.931815 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.931845 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.931864 | orchestrator | 2026-01-05 00:43:32.931882 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-05 00:43:32.931902 | orchestrator | Monday 05 January 2026 00:43:30 +0000 (0:00:00.279) 0:01:13.446 ******** 2026-01-05 00:43:32.931922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.931941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.931956 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.931969 | orchestrator | 2026-01-05 00:43:32.931981 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:43:32.931994 | orchestrator | Monday 05 January 2026 00:43:30 +0000 (0:00:00.145) 0:01:13.591 ******** 2026-01-05 00:43:32.932006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.932019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.932031 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.932043 | orchestrator | 2026-01-05 00:43:32.932055 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:43:32.932067 | orchestrator | Monday 05 January 2026 00:43:30 +0000 (0:00:00.148) 0:01:13.740 ******** 2026-01-05 00:43:32.932080 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:32.932093 | orchestrator | 2026-01-05 00:43:32.932105 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:43:32.932117 | orchestrator | Monday 05 January 2026 00:43:31 +0000 (0:00:00.544) 0:01:14.284 ******** 2026-01-05 00:43:32.932130 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:32.932142 | orchestrator | 2026-01-05 00:43:32.932154 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:43:32.932166 | orchestrator | Monday 05 January 2026 00:43:31 +0000 (0:00:00.519) 0:01:14.804 ******** 2026-01-05 00:43:32.932178 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:32.932190 | orchestrator | 2026-01-05 00:43:32.932203 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:43:32.932215 | orchestrator | Monday 05 January 2026 00:43:31 +0000 (0:00:00.155) 0:01:14.960 ******** 2026-01-05 00:43:32.932228 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'vg_name': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'}) 2026-01-05 00:43:32.932243 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'vg_name': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'}) 2026-01-05 00:43:32.932262 | orchestrator | 2026-01-05 00:43:32.932273 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:43:32.932284 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.230) 0:01:15.191 ******** 2026-01-05 00:43:32.932313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.932325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.932336 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.932347 | orchestrator | 2026-01-05 00:43:32.932358 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-05 00:43:32.932369 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.173) 0:01:15.364 ******** 2026-01-05 00:43:32.932380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.932391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.932402 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.932413 | orchestrator | 2026-01-05 00:43:32.932424 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:43:32.932435 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.174) 0:01:15.538 ******** 2026-01-05 00:43:32.932446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'})  2026-01-05 00:43:32.932457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'})  2026-01-05 00:43:32.932467 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:32.932478 | orchestrator | 2026-01-05 00:43:32.932489 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:43:32.932499 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.198) 0:01:15.737 ******** 2026-01-05 00:43:32.932510 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:43:32.932550 | orchestrator |  "lvm_report": { 2026-01-05 00:43:32.932563 | orchestrator |  "lv": [ 2026-01-05 00:43:32.932574 | orchestrator |  { 2026-01-05 00:43:32.932585 | orchestrator |  "lv_name": "osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9", 2026-01-05 00:43:32.932603 | orchestrator |  "vg_name": "ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9" 2026-01-05 00:43:32.932614 | orchestrator |  }, 2026-01-05 00:43:32.932625 | orchestrator |  { 2026-01-05 00:43:32.932636 | orchestrator |  "lv_name": "osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769", 2026-01-05 00:43:32.932647 | orchestrator |  "vg_name": "ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769" 2026-01-05 00:43:32.932658 | orchestrator |  } 2026-01-05 00:43:32.932669 | orchestrator |  ], 2026-01-05 00:43:32.932679 | orchestrator |  "pv": [ 2026-01-05 00:43:32.932690 | orchestrator |  { 2026-01-05 00:43:32.932701 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 00:43:32.932711 | orchestrator |  "vg_name": "ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9" 2026-01-05 00:43:32.932722 | orchestrator |  }, 2026-01-05 00:43:32.932733 | orchestrator |  { 2026-01-05 00:43:32.932744 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 00:43:32.932755 | orchestrator |  "vg_name": "ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769" 2026-01-05 00:43:32.932766 | orchestrator |  } 2026-01-05 00:43:32.932776 | orchestrator |  ] 2026-01-05 00:43:32.932787 | orchestrator |  } 2026-01-05 00:43:32.932798 | orchestrator | } 2026-01-05 00:43:32.932817 | orchestrator | 2026-01-05 00:43:32.932829 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:43:32.932840 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:43:32.932851 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:43:32.932862 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:43:32.932873 | orchestrator | 2026-01-05 00:43:32.932884 | orchestrator | 2026-01-05 00:43:32.932894 | orchestrator | 2026-01-05 00:43:32.932905 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:43:32.932916 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.149) 0:01:15.887 ******** 2026-01-05 00:43:32.932926 | orchestrator | =============================================================================== 2026-01-05 00:43:32.932937 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s 2026-01-05 00:43:32.932948 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2026-01-05 00:43:32.932958 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.81s 2026-01-05 00:43:32.932969 | orchestrator | Add known partitions to the list of available block devices ------------- 1.76s 2026-01-05 00:43:32.932980 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-01-05 00:43:32.932991 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s 2026-01-05 00:43:32.933001 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-01-05 00:43:32.933012 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-01-05 00:43:32.933031 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-01-05 00:43:33.409107 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2026-01-05 00:43:33.409220 | orchestrator | Print LVM report data --------------------------------------------------- 1.11s 2026-01-05 00:43:33.409235 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-01-05 00:43:33.409247 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2026-01-05 00:43:33.409258 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2026-01-05 00:43:33.409269 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-01-05 00:43:33.409279 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-01-05 00:43:33.409290 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-01-05 00:43:33.409301 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.74s 2026-01-05 00:43:33.409312 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.74s 2026-01-05 00:43:33.409323 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.73s 2026-01-05 00:43:45.585481 | orchestrator | 2026-01-05 00:43:45 | INFO  | Task 14703096-adba-4e76-bd1c-65c8e9e39353 (facts) was prepared for execution. 2026-01-05 00:43:45.585567 | orchestrator | 2026-01-05 00:43:45 | INFO  | It takes a moment until task 14703096-adba-4e76-bd1c-65c8e9e39353 (facts) has been started and output is visible here. 
2026-01-05 00:43:58.367204 | orchestrator | 2026-01-05 00:43:58.367342 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 00:43:58.367374 | orchestrator | 2026-01-05 00:43:58.367401 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 00:43:58.367421 | orchestrator | Monday 05 January 2026 00:43:50 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-01-05 00:43:58.367474 | orchestrator | ok: [testbed-manager] 2026-01-05 00:43:58.367558 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:43:58.367579 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:43:58.367597 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:43:58.367616 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:58.367628 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:43:58.367639 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:58.367649 | orchestrator | 2026-01-05 00:43:58.367660 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 00:43:58.367701 | orchestrator | Monday 05 January 2026 00:43:51 +0000 (0:00:01.107) 0:00:01.390 ******** 2026-01-05 00:43:58.367722 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:43:58.367742 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:43:58.367760 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:43:58.367780 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:43:58.367801 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:58.367820 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:58.367838 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:58.367857 | orchestrator | 2026-01-05 00:43:58.367878 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:43:58.367897 | orchestrator | 2026-01-05 00:43:58.367916 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 00:43:58.367935 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:01.302) 0:00:02.692 ******** 2026-01-05 00:43:58.367954 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:43:58.367975 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:43:58.367994 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:43:58.368013 | orchestrator | ok: [testbed-manager] 2026-01-05 00:43:58.368026 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:58.368037 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:43:58.368048 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:58.368059 | orchestrator | 2026-01-05 00:43:58.368069 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 00:43:58.368080 | orchestrator | 2026-01-05 00:43:58.368091 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 00:43:58.368102 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:04.847) 0:00:07.540 ******** 2026-01-05 00:43:58.368112 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:43:58.368123 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:43:58.368160 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:43:58.368171 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:43:58.368182 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:58.368193 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:58.368203 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:58.368214 | orchestrator | 2026-01-05 00:43:58.368225 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:43:58.368237 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368250 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 00:43:58.368261 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368272 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368283 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368293 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368304 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:43:58.368327 | orchestrator | 2026-01-05 00:43:58.368339 | orchestrator | 2026-01-05 00:43:58.368350 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:43:58.368361 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:00.553) 0:00:08.093 ******** 2026-01-05 00:43:58.368371 | orchestrator | =============================================================================== 2026-01-05 00:43:58.368382 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2026-01-05 00:43:58.368393 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s 2026-01-05 00:43:58.368404 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2026-01-05 00:43:58.368415 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-01-05 00:44:10.977579 | orchestrator | 2026-01-05 00:44:10 | INFO  | Task fb4e816d-3895-412e-a5dd-d93b2b772feb (frr) was prepared for execution. 2026-01-05 00:44:10.977673 | orchestrator | 2026-01-05 00:44:10 | INFO  | It takes a moment until task fb4e816d-3895-412e-a5dd-d93b2b772feb (frr) has been started and output is visible here. 
2026-01-05 00:44:38.397885 | orchestrator | 2026-01-05 00:44:38.398107 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-05 00:44:38.398138 | orchestrator | 2026-01-05 00:44:38.398160 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-05 00:44:38.398180 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.217) 0:00:00.217 ******** 2026-01-05 00:44:38.398201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:44:38.398223 | orchestrator | 2026-01-05 00:44:38.398242 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-05 00:44:38.398262 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:00.199) 0:00:00.416 ******** 2026-01-05 00:44:38.398282 | orchestrator | changed: [testbed-manager] 2026-01-05 00:44:38.398303 | orchestrator | 2026-01-05 00:44:38.398323 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-05 00:44:38.398344 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:01.177) 0:00:01.593 ******** 2026-01-05 00:44:38.398368 | orchestrator | changed: [testbed-manager] 2026-01-05 00:44:38.398389 | orchestrator | 2026-01-05 00:44:38.398410 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-05 00:44:38.398455 | orchestrator | Monday 05 January 2026 00:44:27 +0000 (0:00:11.373) 0:00:12.967 ******** 2026-01-05 00:44:38.398477 | orchestrator | ok: [testbed-manager] 2026-01-05 00:44:38.398497 | orchestrator | 2026-01-05 00:44:38.398516 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-05 00:44:38.398535 | orchestrator | Monday 05 January 2026 00:44:28 +0000 (0:00:01.059) 0:00:14.026 ******** 2026-01-05 
00:44:38.398559 | orchestrator | changed: [testbed-manager] 2026-01-05 00:44:38.398577 | orchestrator | 2026-01-05 00:44:38.398596 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-05 00:44:38.398614 | orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:00.986) 0:00:15.013 ******** 2026-01-05 00:44:38.398631 | orchestrator | ok: [testbed-manager] 2026-01-05 00:44:38.398648 | orchestrator | 2026-01-05 00:44:38.398666 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-05 00:44:38.398686 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:01.282) 0:00:16.295 ******** 2026-01-05 00:44:38.398704 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:44:38.398720 | orchestrator | 2026-01-05 00:44:38.398737 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-05 00:44:38.398755 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.152) 0:00:16.448 ******** 2026-01-05 00:44:38.398803 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:44:38.398853 | orchestrator | 2026-01-05 00:44:38.398871 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-05 00:44:38.398890 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:00.175) 0:00:16.623 ******** 2026-01-05 00:44:38.398906 | orchestrator | changed: [testbed-manager] 2026-01-05 00:44:38.398921 | orchestrator | 2026-01-05 00:44:38.398938 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-05 00:44:38.398955 | orchestrator | Monday 05 January 2026 00:44:32 +0000 (0:00:01.022) 0:00:17.646 ******** 2026-01-05 00:44:38.398972 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-05 00:44:38.398990 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-05 00:44:38.399007 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-05 00:44:38.399023 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-05 00:44:38.399039 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-05 00:44:38.399056 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-05 00:44:38.399071 | orchestrator | 2026-01-05 00:44:38.399087 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-05 00:44:38.399105 | orchestrator | Monday 05 January 2026 00:44:34 +0000 (0:00:02.370) 0:00:20.016 ******** 2026-01-05 00:44:38.399121 | orchestrator | ok: [testbed-manager] 2026-01-05 00:44:38.399138 | orchestrator | 2026-01-05 00:44:38.399154 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-05 00:44:38.399171 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:01.751) 0:00:21.767 ******** 2026-01-05 00:44:38.399188 | orchestrator | changed: [testbed-manager] 2026-01-05 00:44:38.399204 | orchestrator | 2026-01-05 00:44:38.399219 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:44:38.399238 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:44:38.399257 | orchestrator | 2026-01-05 00:44:38.399273 | orchestrator | 2026-01-05 00:44:38.399289 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:44:38.399306 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:01.495) 0:00:23.262 ******** 2026-01-05 00:44:38.399322 | 
orchestrator | =============================================================================== 2026-01-05 00:44:38.399339 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.37s 2026-01-05 00:44:38.399355 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.37s 2026-01-05 00:44:38.399372 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.75s 2026-01-05 00:44:38.399388 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.50s 2026-01-05 00:44:38.399404 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.28s 2026-01-05 00:44:38.399505 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.18s 2026-01-05 00:44:38.399526 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s 2026-01-05 00:44:38.399544 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.02s 2026-01-05 00:44:38.399562 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s 2026-01-05 00:44:38.399577 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-01-05 00:44:38.399593 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-01-05 00:44:38.399609 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-01-05 00:44:38.742225 | orchestrator | 2026-01-05 00:44:38.745974 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jan 5 00:44:38 UTC 2026 2026-01-05 00:44:38.746085 | orchestrator | 2026-01-05 00:44:40.770985 | orchestrator | 2026-01-05 00:44:40 | INFO  | Collection nutshell is prepared for execution 2026-01-05 00:44:40.771088 | orchestrator | 2026-01-05 00:44:40 | INFO  | A [0] - 
dotfiles 2026-01-05 00:44:50.839960 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - homer 2026-01-05 00:44:50.840088 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - netdata 2026-01-05 00:44:50.840095 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - openstackclient 2026-01-05 00:44:50.840107 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - phpmyadmin 2026-01-05 00:44:50.840226 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - common 2026-01-05 00:44:50.845074 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- loadbalancer 2026-01-05 00:44:50.845186 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [2] --- opensearch 2026-01-05 00:44:50.845195 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [2] --- mariadb-ng 2026-01-05 00:44:50.845199 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [3] ---- horizon 2026-01-05 00:44:50.845493 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [3] ---- keystone 2026-01-05 00:44:50.845780 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- neutron 2026-01-05 00:44:50.846109 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ wait-for-nova 2026-01-05 00:44:50.846118 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [6] ------- octavia 2026-01-05 00:44:50.847805 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- barbican 2026-01-05 00:44:50.847829 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- designate 2026-01-05 00:44:50.847983 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- ironic 2026-01-05 00:44:50.848520 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- placement 2026-01-05 00:44:50.848604 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- magnum 2026-01-05 00:44:50.849104 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- openvswitch 2026-01-05 00:44:50.849142 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [2] --- ovn 2026-01-05 00:44:50.849460 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- memcached 2026-01-05 
00:44:50.849858 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- redis 2026-01-05 00:44:50.849883 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- rabbitmq-ng 2026-01-05 00:44:50.850164 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - kubernetes 2026-01-05 00:44:50.853401 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- kubeconfig 2026-01-05 00:44:50.853476 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- copy-kubeconfig 2026-01-05 00:44:50.853488 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [0] - ceph 2026-01-05 00:44:50.856291 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [1] -- ceph-pools 2026-01-05 00:44:50.856336 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [2] --- copy-ceph-keys 2026-01-05 00:44:50.856350 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [3] ---- cephclient 2026-01-05 00:44:50.856361 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-05 00:44:50.856664 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- wait-for-keystone 2026-01-05 00:44:50.856683 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-05 00:44:50.857578 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ glance 2026-01-05 00:44:50.857596 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ cinder 2026-01-05 00:44:50.857632 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ nova 2026-01-05 00:44:50.857641 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [4] ----- prometheus 2026-01-05 00:44:50.857651 | orchestrator | 2026-01-05 00:44:50 | INFO  | A [5] ------ grafana 2026-01-05 00:44:51.106932 | orchestrator | 2026-01-05 00:44:51 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-05 00:44:51.107042 | orchestrator | 2026-01-05 00:44:51 | INFO  | Tasks are running in the background 2026-01-05 00:44:54.295333 | orchestrator | 2026-01-05 00:44:54 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-05 00:44:56.457806 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:44:56.457968 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:44:56.458673 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:44:56.464392 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:44:56.465013 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:44:56.468307 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:44:56.469000 | orchestrator | 2026-01-05 00:44:56 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:44:56.469096 | orchestrator | 2026-01-05 00:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:44:59.503076 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:44:59.503217 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:44:59.505862 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:44:59.507965 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:44:59.508391 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:44:59.509267 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:44:59.509699 | orchestrator | 2026-01-05 00:44:59 | INFO  | Task 
0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:44:59.509733 | orchestrator | 2026-01-05 00:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:02.553126 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:02.553352 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:02.553389 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:02.553832 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:02.555712 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:02.555931 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:02.556388 | orchestrator | 2026-01-05 00:45:02 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:02.556497 | orchestrator | 2026-01-05 00:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:05.662861 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:05.662955 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:05.675453 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:05.675563 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:05.675578 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:05.675590 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 
110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:05.675601 | orchestrator | 2026-01-05 00:45:05 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:05.675613 | orchestrator | 2026-01-05 00:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:08.697755 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:08.699045 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:08.699295 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:08.699826 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:08.700381 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:08.700912 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:08.701540 | orchestrator | 2026-01-05 00:45:08 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:08.701859 | orchestrator | 2026-01-05 00:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:11.761674 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:11.761782 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:11.763495 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:11.763773 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:11.766282 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 
112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:11.766916 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:11.769372 | orchestrator | 2026-01-05 00:45:11 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:11.769439 | orchestrator | 2026-01-05 00:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:14.902533 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:14.902665 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:14.902687 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:14.902791 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:14.902804 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:14.902813 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:14.902822 | orchestrator | 2026-01-05 00:45:14 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:14.902831 | orchestrator | 2026-01-05 00:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:17.871036 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:17.877039 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:17.879824 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state STARTED 2026-01-05 00:45:17.887366 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 
26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:17.891442 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:17.891523 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:17.894101 | orchestrator | 2026-01-05 00:45:17 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:17.896362 | orchestrator | 2026-01-05 00:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:20.972922 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:20.973939 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:20.975264 | orchestrator | 2026-01-05 00:45:20.975320 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-05 00:45:20.975335 | orchestrator | 2026-01-05 00:45:20.975347 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-01-05 00:45:20.975359 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.732) 0:00:00.732 ******** 2026-01-05 00:45:20.975371 | orchestrator | changed: [testbed-manager] 2026-01-05 00:45:20.975384 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:45:20.975430 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:45:20.975441 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:45:20.975452 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:45:20.975463 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:45:20.975474 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:45:20.975485 | orchestrator | 2026-01-05 00:45:20.975496 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-01-05 00:45:20.975509 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:03.566) 0:00:04.299 ******** 2026-01-05 00:45:20.975521 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:45:20.975532 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:45:20.975544 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:45:20.975554 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:45:20.975565 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:45:20.975576 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:45:20.975587 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:45:20.975598 | orchestrator | 2026-01-05 00:45:20.975609 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-01-05 00:45:20.975629 | orchestrator | Monday 05 January 2026 00:45:11 +0000 (0:00:01.991) 0:00:06.290 ******** 2026-01-05 00:45:20.975671 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.101638', 'end': '2026-01-05 00:45:10.110687', 'delta': '0:00:00.009049', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.975687 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.010812', 'end': '2026-01-05 00:45:10.019711', 'delta': '0:00:00.008899', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.975698 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.560912', 'end': '2026-01-05 00:45:10.576035', 'delta': '0:00:00.015123', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.975779 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.845473', 'end': '2026-01-05 00:45:10.854024', 'delta': '0:00:00.008551', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.975794 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.258361', 'end': '2026-01-05 00:45:10.267159', 'delta': '0:00:00.008798', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.976129 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.407043', 'end': '2026-01-05 00:45:10.416414', 'delta': '0:00:00.009371', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.976146 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:45:10.488764', 'end': '2026-01-05 00:45:10.498197', 'delta': '0:00:00.009433', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:45:20.976159 | orchestrator | 2026-01-05 00:45:20.976172 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-01-05 00:45:20.976185 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:02.071) 0:00:08.362 ******** 2026-01-05 00:45:20.976196 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:45:20.976207 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:45:20.976218 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:45:20.976229 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:45:20.976240 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:45:20.976251 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:45:20.976261 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:45:20.976272 | orchestrator | 2026-01-05 00:45:20.976283 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-01-05 00:45:20.976294 | orchestrator | Monday 05 January 2026 00:45:16 +0000 (0:00:03.084) 0:00:11.447 ******** 2026-01-05 00:45:20.976305 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-05 00:45:20.976316 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-05 00:45:20.976327 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-05 00:45:20.976338 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-05 00:45:20.976349 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-05 00:45:20.976360 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-05 00:45:20.976371 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-05 00:45:20.976382 | orchestrator | 2026-01-05 00:45:20.976419 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:45:20.976440 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976608 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976633 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976644 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976662 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976673 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976683 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:45:20.976694 | orchestrator | 2026-01-05 00:45:20.976705 | orchestrator | 2026-01-05 00:45:20.976716 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:45:20.976727 | orchestrator | Monday 05 January 2026 00:45:20 +0000 (0:00:03.689) 0:00:15.136 ******** 2026-01-05 00:45:20.976738 | orchestrator | =============================================================================== 2026-01-05 00:45:20.976749 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.69s 2026-01-05 00:45:20.976760 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.57s 2026-01-05 00:45:20.976771 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.08s 2026-01-05 00:45:20.976781 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.07s 2026-01-05 00:45:20.976792 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.99s 2026-01-05 00:45:20.976803 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 5a3b6001-3291-4758-8d03-31b083800efd is in state SUCCESS 2026-01-05 00:45:20.976820 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:20.981533 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:20.981590 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:20.982071 | orchestrator | 2026-01-05 00:45:20 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:20.982150 | orchestrator | 2026-01-05 00:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:24.059915 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:24.060031 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:24.060046 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:24.060058 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:24.060069 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:24.060081 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:24.060092 | orchestrator | 2026-01-05 00:45:24 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:24.060103 | orchestrator | 2026-01-05 00:45:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:27.155742 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state 
STARTED 2026-01-05 00:45:27.160514 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:27.160615 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:27.174521 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:27.174606 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:27.174621 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:27.174632 | orchestrator | 2026-01-05 00:45:27 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:27.174644 | orchestrator | 2026-01-05 00:45:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:30.282531 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:30.282613 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:30.282621 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:30.282625 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:30.282648 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:30.282653 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:30.282658 | orchestrator | 2026-01-05 00:45:30 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:30.282663 | orchestrator | 2026-01-05 00:45:30 | INFO  | Wait 1 second(s) until the next check 
2026-01-05 00:45:33.376253 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:33.379070 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:33.381200 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:33.382345 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:33.383809 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:33.385548 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:33.386239 | orchestrator | 2026-01-05 00:45:33 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:33.386277 | orchestrator | 2026-01-05 00:45:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:36.444782 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:36.445151 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:36.445876 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:36.446944 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:36.449787 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:36.450811 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:36.451663 | orchestrator | 2026-01-05 00:45:36 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is 
in state STARTED 2026-01-05 00:45:36.451675 | orchestrator | 2026-01-05 00:45:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:39.526284 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:39.531395 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:39.534618 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:39.538214 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:39.540834 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:39.543677 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:39.545630 | orchestrator | 2026-01-05 00:45:39 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:39.545843 | orchestrator | 2026-01-05 00:45:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:42.651859 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:42.651968 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:42.660221 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:42.667753 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:42.667827 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:42.667841 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in 
state STARTED 2026-01-05 00:45:42.667852 | orchestrator | 2026-01-05 00:45:42 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:42.667891 | orchestrator | 2026-01-05 00:45:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:45.729994 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:45.730145 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:45.730152 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:45.730157 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:45.730161 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:45.730165 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:45.730169 | orchestrator | 2026-01-05 00:45:45 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state STARTED 2026-01-05 00:45:45.730174 | orchestrator | 2026-01-05 00:45:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:48.841583 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:48.841708 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:48.841760 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:48.851226 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:48.928569 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state 
STARTED 2026-01-05 00:45:48.928680 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:48.928694 | orchestrator | 2026-01-05 00:45:48 | INFO  | Task 0bbc2ca3-de2f-4dc9-8616-1491f1742cb9 is in state SUCCESS 2026-01-05 00:45:48.928707 | orchestrator | 2026-01-05 00:45:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:51.924012 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:51.924689 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:51.926611 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:51.927986 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:51.929188 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:51.930706 | orchestrator | 2026-01-05 00:45:51 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:51.930733 | orchestrator | 2026-01-05 00:45:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:54.994825 | orchestrator | 2026-01-05 00:45:54 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:54.996679 | orchestrator | 2026-01-05 00:45:54 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:54.999906 | orchestrator | 2026-01-05 00:45:54 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:55.003440 | orchestrator | 2026-01-05 00:45:55 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:55.004620 | orchestrator | 2026-01-05 00:45:55 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 
2026-01-05 00:45:55.027422 | orchestrator | 2026-01-05 00:45:55 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:55.027583 | orchestrator | 2026-01-05 00:45:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:45:58.184607 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:45:58.191542 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:45:58.195678 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:45:58.198246 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:45:58.200756 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state STARTED 2026-01-05 00:45:58.203066 | orchestrator | 2026-01-05 00:45:58 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 2026-01-05 00:45:58.203133 | orchestrator | 2026-01-05 00:45:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:01.294771 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:46:01.295575 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED 2026-01-05 00:46:01.296242 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:46:01.298411 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:46:01.298955 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 112727d4-b92d-42f3-bd49-49f19d53ea1f is in state SUCCESS 2026-01-05 00:46:01.302128 | orchestrator | 2026-01-05 00:46:01 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED 
2026-01-05 00:46:01.302393 | orchestrator | 2026-01-05 00:46:01 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:04.334732 | orchestrator | 2026-01-05 00:46:04 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:04.334850 | orchestrator | 2026-01-05 00:46:04 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:04.336985 | orchestrator | 2026-01-05 00:46:04 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:04.338160 | orchestrator | 2026-01-05 00:46:04 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:04.338642 | orchestrator | 2026-01-05 00:46:04 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:04.338691 | orchestrator | 2026-01-05 00:46:04 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:07.397274 | orchestrator | 2026-01-05 00:46:07 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:07.398648 | orchestrator | 2026-01-05 00:46:07 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:07.401517 | orchestrator | 2026-01-05 00:46:07 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:07.401565 | orchestrator | 2026-01-05 00:46:07 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:07.403318 | orchestrator | 2026-01-05 00:46:07 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:07.403465 | orchestrator | 2026-01-05 00:46:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:10.468758 | orchestrator | 2026-01-05 00:46:10 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:10.474419 | orchestrator | 2026-01-05 00:46:10 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:10.477150 | orchestrator | 2026-01-05 00:46:10 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:10.479284 | orchestrator | 2026-01-05 00:46:10 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:10.481158 | orchestrator | 2026-01-05 00:46:10 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:10.481393 | orchestrator | 2026-01-05 00:46:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:13.526997 | orchestrator | 2026-01-05 00:46:13 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:13.527189 | orchestrator | 2026-01-05 00:46:13 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:13.527256 | orchestrator | 2026-01-05 00:46:13 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:13.530127 | orchestrator | 2026-01-05 00:46:13 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:13.531057 | orchestrator | 2026-01-05 00:46:13 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:13.531098 | orchestrator | 2026-01-05 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:16.585679 | orchestrator | 2026-01-05 00:46:16 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:16.587970 | orchestrator | 2026-01-05 00:46:16 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:16.589132 | orchestrator | 2026-01-05 00:46:16 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:16.590516 | orchestrator | 2026-01-05 00:46:16 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:16.591621 | orchestrator | 2026-01-05 00:46:16 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:16.591659 | orchestrator | 2026-01-05 00:46:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:19.646324 | orchestrator | 2026-01-05 00:46:19 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:19.647646 | orchestrator | 2026-01-05 00:46:19 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:19.648072 | orchestrator | 2026-01-05 00:46:19 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:19.649882 | orchestrator | 2026-01-05 00:46:19 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:19.650556 | orchestrator | 2026-01-05 00:46:19 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:19.650605 | orchestrator | 2026-01-05 00:46:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:22.751662 | orchestrator | 2026-01-05 00:46:22 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:22.752918 | orchestrator | 2026-01-05 00:46:22 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:22.753998 | orchestrator | 2026-01-05 00:46:22 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:22.756575 | orchestrator | 2026-01-05 00:46:22 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:22.758339 | orchestrator | 2026-01-05 00:46:22 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:22.758397 | orchestrator | 2026-01-05 00:46:22 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:25.810649 | orchestrator | 2026-01-05 00:46:25 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:25.810739 | orchestrator | 2026-01-05 00:46:25 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:25.810748 | orchestrator | 2026-01-05 00:46:25 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:25.810755 | orchestrator | 2026-01-05 00:46:25 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:25.810761 | orchestrator | 2026-01-05 00:46:25 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:25.810767 | orchestrator | 2026-01-05 00:46:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:28.841943 | orchestrator | 2026-01-05 00:46:28 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:28.844629 | orchestrator | 2026-01-05 00:46:28 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:28.844775 | orchestrator | 2026-01-05 00:46:28 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:28.845975 | orchestrator | 2026-01-05 00:46:28 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:28.847106 | orchestrator | 2026-01-05 00:46:28 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:28.847836 | orchestrator | 2026-01-05 00:46:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:31.883210 | orchestrator | 2026-01-05 00:46:31 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:31.883607 | orchestrator | 2026-01-05 00:46:31 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:31.884447 | orchestrator | 2026-01-05 00:46:31 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:31.885383 | orchestrator | 2026-01-05 00:46:31 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:31.885929 | orchestrator | 2026-01-05 00:46:31 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:31.885957 | orchestrator | 2026-01-05 00:46:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:34.922521 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:34.923740 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state STARTED
2026-01-05 00:46:34.924909 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:34.925877 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:34.926880 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state STARTED
2026-01-05 00:46:34.926933 | orchestrator | 2026-01-05 00:46:34 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:37.965047 | orchestrator | 2026-01-05 00:46:37 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:46:37.965179 | orchestrator | 2026-01-05 00:46:37 | INFO  | Task 6f41383b-dffa-4a4c-8a9e-28fb6a1980be is in state SUCCESS
2026-01-05 00:46:37.966627 | orchestrator |
2026-01-05 00:46:37.966692 | orchestrator |
2026-01-05 00:46:37.966711 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-05 00:46:37.966729 | orchestrator |
2026-01-05 00:46:37.966747 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-05 00:46:37.966759 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:01.098) 0:00:01.098 ********
2026-01-05 00:46:37.966769 | orchestrator | ok: [testbed-manager] => {
2026-01-05 00:46:37.966781 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
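The repeated state checks above follow a simple poll-until-done pattern: query each task's state, log it, and sleep a fixed interval while any task is still STARTED. A minimal sketch in Python; `get_state` is a hypothetical stand-in for the real task-state API, which the log does not show:

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task has left the STARTED state.

    get_state is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED" or "SUCCESS"); the real
    lookup used by the job above is not shown in the log.
    """
    pending = list(task_ids)
    results = {}
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
            else:
                # Task finished; record its terminal state.
                results[task_id] = state
        if still_running:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
        pending = still_running
    return results
```

With a one-second interval this reproduces the cadence seen in the log: five STARTED lines plus one "Wait 1 second(s)" line per cycle until each task reports SUCCESS.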
2026-01-05 00:46:37.966792 | orchestrator | }
2026-01-05 00:46:37.966803 | orchestrator |
2026-01-05 00:46:37.966813 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-05 00:46:37.966822 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.251) 0:00:01.350 ********
2026-01-05 00:46:37.966832 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.966843 | orchestrator |
2026-01-05 00:46:37.966853 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-05 00:46:37.966862 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:01.835) 0:00:03.185 ********
2026-01-05 00:46:37.966872 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-05 00:46:37.966882 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-05 00:46:37.966913 | orchestrator |
2026-01-05 00:46:37.966924 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-05 00:46:37.966933 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:02.224) 0:00:05.410 ********
2026-01-05 00:46:37.966944 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.966954 | orchestrator |
2026-01-05 00:46:37.966964 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-05 00:46:37.966973 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:03.112) 0:00:08.522 ********
2026-01-05 00:46:37.966983 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.966992 | orchestrator |
2026-01-05 00:46:37.967002 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-05 00:46:37.967011 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:01.950) 0:00:10.473 ********
2026-01-05 00:46:37.967021 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-05 00:46:37.967030 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.967039 | orchestrator |
2026-01-05 00:46:37.967049 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-05 00:46:37.967058 | orchestrator | Monday 05 January 2026 00:45:43 +0000 (0:00:27.343) 0:00:37.816 ********
2026-01-05 00:46:37.967068 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967077 | orchestrator |
2026-01-05 00:46:37.967087 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:46:37.967097 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.967108 | orchestrator |
2026-01-05 00:46:37.967118 | orchestrator |
2026-01-05 00:46:37.967127 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:46:37.967137 | orchestrator | Monday 05 January 2026 00:45:47 +0000 (0:00:04.368) 0:00:42.184 ********
2026-01-05 00:46:37.967146 | orchestrator | ===============================================================================
2026-01-05 00:46:37.967155 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.34s
2026-01-05 00:46:37.967165 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.37s
2026-01-05 00:46:37.967174 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.11s
2026-01-05 00:46:37.967184 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.22s
2026-01-05 00:46:37.967193 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.95s
2026-01-05 00:46:37.967205 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.84s
2026-01-05 00:46:37.967216 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.25s
2026-01-05 00:46:37.967227 | orchestrator |
2026-01-05 00:46:37.967238 | orchestrator |
2026-01-05 00:46:37.967250 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-05 00:46:37.967261 | orchestrator |
2026-01-05 00:46:37.967272 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-05 00:46:37.967284 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:01.050) 0:00:01.050 ********
2026-01-05 00:46:37.967295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-05 00:46:37.967308 | orchestrator |
2026-01-05 00:46:37.967351 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-05 00:46:37.967363 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.360) 0:00:01.410 ********
2026-01-05 00:46:37.967374 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-05 00:46:37.967386 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-05 00:46:37.967397 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-05 00:46:37.967409 | orchestrator |
2026-01-05 00:46:37.967428 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-05 00:46:37.967440 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:01.564) 0:00:02.975 ********
2026-01-05 00:46:37.967452 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967463 | orchestrator |
2026-01-05 00:46:37.967474 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-05 00:46:37.967486 | orchestrator | Monday 05 January 2026 00:45:11 +0000 (0:00:03.192) 0:00:06.168 ********
2026-01-05 00:46:37.967511 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-05 00:46:37.967523 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.967535 | orchestrator |
2026-01-05 00:46:37.967546 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-05 00:46:37.967558 | orchestrator | Monday 05 January 2026 00:45:49 +0000 (0:00:37.831) 0:00:44.000 ********
2026-01-05 00:46:37.967569 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967579 | orchestrator |
2026-01-05 00:46:37.967589 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-05 00:46:37.967598 | orchestrator | Monday 05 January 2026 00:45:51 +0000 (0:00:02.143) 0:00:46.143 ********
2026-01-05 00:46:37.967608 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.967617 | orchestrator |
2026-01-05 00:46:37.967627 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-05 00:46:37.967636 | orchestrator | Monday 05 January 2026 00:45:52 +0000 (0:00:01.309) 0:00:47.453 ********
2026-01-05 00:46:37.967646 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967655 | orchestrator |
2026-01-05 00:46:37.967665 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-05 00:46:37.967674 | orchestrator | Monday 05 January 2026 00:45:56 +0000 (0:00:04.129) 0:00:51.582 ********
2026-01-05 00:46:37.967684 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967693 | orchestrator |
2026-01-05 00:46:37.967703 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-05 00:46:37.967712 | orchestrator | Monday 05 January 2026 00:45:57 +0000 (0:00:00.944) 0:00:52.526 ********
2026-01-05 00:46:37.967722 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.967746 | orchestrator |
2026-01-05 00:46:37.967756 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-05 00:46:37.967766 | orchestrator | Monday 05 January 2026 00:45:58 +0000 (0:00:00.787) 0:00:53.313 ********
2026-01-05 00:46:37.967880 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.967894 | orchestrator |
2026-01-05 00:46:37.967903 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:46:37.967913 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.967923 | orchestrator |
2026-01-05 00:46:37.967932 | orchestrator |
2026-01-05 00:46:37.967942 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:46:37.967951 | orchestrator | Monday 05 January 2026 00:45:59 +0000 (0:00:00.523) 0:00:53.837 ********
2026-01-05 00:46:37.967961 | orchestrator | ===============================================================================
2026-01-05 00:46:37.967971 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.83s
2026-01-05 00:46:37.967980 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.13s
2026-01-05 00:46:37.967989 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.19s
2026-01-05 00:46:37.967999 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.14s
2026-01-05 00:46:37.968009 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.56s
2026-01-05 00:46:37.968018 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.31s
2026-01-05 00:46:37.968027 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.94s
2026-01-05 00:46:37.968044 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.79s
2026-01-05 00:46:37.968054 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.53s
2026-01-05 00:46:37.968063 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.36s
2026-01-05 00:46:37.968073 | orchestrator |
2026-01-05 00:46:37.968082 | orchestrator |
2026-01-05 00:46:37.968092 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-05 00:46:37.968101 | orchestrator |
2026-01-05 00:46:37.968111 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-05 00:46:37.968121 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:00.333) 0:00:00.333 ********
2026-01-05 00:46:37.968130 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.968141 | orchestrator |
2026-01-05 00:46:37.968158 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-05 00:46:37.968173 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:01.656) 0:00:01.990 ********
2026-01-05 00:46:37.968199 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-05 00:46:37.968214 | orchestrator |
2026-01-05 00:46:37.968230 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-05 00:46:37.968245 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:00.802) 0:00:02.792 ********
2026-01-05 00:46:37.968260 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.968275 | orchestrator |
2026-01-05 00:46:37.968297 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-05 00:46:37.968313 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:01.894) 0:00:04.686 ********
2026-01-05 00:46:37.968353 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-05 00:46:37.968370 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.968385 | orchestrator |
2026-01-05 00:46:37.968399 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-05 00:46:37.968414 | orchestrator | Monday 05 January 2026 00:46:31 +0000 (0:01:01.572) 0:01:06.259 ********
2026-01-05 00:46:37.968429 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.968445 | orchestrator |
2026-01-05 00:46:37.968461 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:46:37.968476 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.968492 | orchestrator |
2026-01-05 00:46:37.968508 | orchestrator |
2026-01-05 00:46:37.968526 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:46:37.968558 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:03.289) 0:01:09.549 ********
2026-01-05 00:46:37.968577 | orchestrator | ===============================================================================
2026-01-05 00:46:37.968596 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.57s
2026-01-05 00:46:37.968614 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.29s
2026-01-05 00:46:37.968633 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.89s
2026-01-05 00:46:37.968652 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.66s
2026-01-05 00:46:37.968667 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.80s
2026-01-05 00:46:37.968683 | orchestrator | 2026-01-05 00:46:37 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:37.968701 | orchestrator | 2026-01-05 00:46:37 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:37.969302 | orchestrator | 2026-01-05 00:46:37 | INFO  | Task 110d85a1-57c5-4133-a0fd-0db9168f25ee is in state SUCCESS
2026-01-05 00:46:37.969421 | orchestrator | 2026-01-05 00:46:37 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:46:37.969892 | orchestrator |
2026-01-05 00:46:37.969958 | orchestrator |
2026-01-05 00:46:37.970002 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:46:37.970088 | orchestrator |
2026-01-05 00:46:37.970107 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:46:37.970122 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:00.420) 0:00:00.420 ********
2026-01-05 00:46:37.970139 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-05 00:46:37.970155 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-05 00:46:37.970171 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-05 00:46:37.970187 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-05 00:46:37.970203 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-05 00:46:37.970219 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-05 00:46:37.970235 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-05 00:46:37.970251 | orchestrator |
2026-01-05 00:46:37.970268 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-05 00:46:37.970284 | orchestrator |
2026-01-05 00:46:37.970300 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-05 00:46:37.970317 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:02.295) 0:00:02.715 ********
2026-01-05 00:46:37.970421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:46:37.970450 | orchestrator |
2026-01-05 00:46:37.970469 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-05 00:46:37.970486 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:02.060) 0:00:04.776 ********
2026-01-05 00:46:37.970503 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.970522 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:46:37.970543 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:46:37.970561 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:46:37.970578 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:46:37.970598 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:46:37.970616 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:37.970635 | orchestrator |
2026-01-05 00:46:37.970656 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-05 00:46:37.970676 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:01.711) 0:00:06.487 ********
2026-01-05 00:46:37.970696 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.970718 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:46:37.970739 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:46:37.970759 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:46:37.970781 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:37.970798 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:46:37.970816 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:46:37.970835 | orchestrator |
2026-01-05 00:46:37.970852 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-05 00:46:37.970870 | orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:04.098) 0:00:10.590 ********
2026-01-05 00:46:37.970887 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:46:37.970905 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:46:37.970922 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:46:37.970940 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:46:37.970969 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.970988 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:46:37.971005 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:46:37.971022 | orchestrator |
2026-01-05 00:46:37.971040 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-05 00:46:37.971058 | orchestrator | Monday 05 January 2026 00:45:16 +0000 (0:00:02.601) 0:00:13.191 ********
2026-01-05 00:46:37.971074 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:46:37.971108 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:46:37.971126 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:46:37.971144 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:46:37.971161 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:46:37.971178 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:46:37.971196 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.971213 | orchestrator |
2026-01-05 00:46:37.971230 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-05 00:46:37.971247 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:15.587) 0:00:28.778 ********
2026-01-05 00:46:37.971262 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:46:37.971281 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:46:37.971298 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:46:37.971315 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:46:37.971355 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:46:37.971366 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:46:37.971375 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.971385 | orchestrator |
2026-01-05 00:46:37.971395 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-05 00:46:37.971405 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:43.830) 0:01:12.609 ********
2026-01-05 00:46:37.971416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:46:37.971429 | orchestrator |
2026-01-05 00:46:37.971439 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-05 00:46:37.971448 | orchestrator | Monday 05 January 2026 00:46:17 +0000 (0:00:01.354) 0:01:13.963 ********
2026-01-05 00:46:37.971458 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-05 00:46:37.971469 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-05 00:46:37.971479 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-05 00:46:37.971489 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-05 00:46:37.971519 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-05 00:46:37.971529 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-05 00:46:37.971539 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-05 00:46:37.971549 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-05 00:46:37.971558 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-05 00:46:37.971568 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-05 00:46:37.971578 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-05 00:46:37.971587 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-05 00:46:37.971597 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-05 00:46:37.971607 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-05 00:46:37.971616 | orchestrator |
2026-01-05 00:46:37.971626 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-05 00:46:37.971637 | orchestrator | Monday 05 January 2026 00:46:22 +0000 (0:00:05.353) 0:01:19.317 ********
2026-01-05 00:46:37.971646 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.971656 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:46:37.971666 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:46:37.971675 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:46:37.971685 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:37.971695 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:46:37.971704 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:46:37.971714 | orchestrator |
2026-01-05 00:46:37.971723 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-05 00:46:37.971733 | orchestrator | Monday 05 January 2026 00:46:24 +0000 (0:00:01.125) 0:01:20.443 ********
2026-01-05 00:46:37.971743 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.971762 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:46:37.971772 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:46:37.971781 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:46:37.971791 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:46:37.971800 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:46:37.971810 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:46:37.971819 | orchestrator |
2026-01-05 00:46:37.971829 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-05 00:46:37.971839 | orchestrator | Monday 05 January 2026 00:46:25 +0000 (0:00:01.298) 0:01:21.741 ********
2026-01-05 00:46:37.971848 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.971858 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:46:37.971868 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:46:37.971877 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:37.971887 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:46:37.971897 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:46:37.971907 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:46:37.971916 | orchestrator |
2026-01-05 00:46:37.971926 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-05 00:46:37.971936 | orchestrator | Monday 05 January 2026 00:46:27 +0000 (0:00:01.897) 0:01:23.639 ********
2026-01-05 00:46:37.971945 | orchestrator | ok: [testbed-manager]
2026-01-05 00:46:37.971955 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:46:37.971965 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:46:37.971974 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:46:37.971985 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:37.972001 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:46:37.972016 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:46:37.972030 | orchestrator |
2026-01-05 00:46:37.972047 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-05 00:46:37.972075 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:01.637) 0:01:25.276 ********
2026-01-05 00:46:37.972092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-05 00:46:37.972110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:46:37.972122 | orchestrator |
2026-01-05 00:46:37.972131 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-05 00:46:37.972141 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:01.304) 0:01:26.581 ********
2026-01-05 00:46:37.972151 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.972161 | orchestrator |
2026-01-05 00:46:37.972170 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-05 00:46:37.972180 | orchestrator | Monday 05 January 2026 00:46:32 +0000 (0:00:02.307) 0:01:28.888 ********
2026-01-05 00:46:37.972189 | orchestrator | changed: [testbed-manager]
2026-01-05 00:46:37.972199 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:46:37.972209 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:46:37.972218 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:46:37.972228 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:46:37.972237 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:46:37.972247 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:46:37.972256 | orchestrator |
2026-01-05 00:46:37.972266 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:46:37.972276 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972288 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972298 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972315 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972356 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972371 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972381 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:46:37.972390 | orchestrator |
2026-01-05 00:46:37.972400 | orchestrator |
2026-01-05 00:46:37.972410 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:46:37.972420 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:02.898) 0:01:31.787 ********
2026-01-05 00:46:37.972429 | orchestrator | ===============================================================================
2026-01-05 00:46:37.972439 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.83s
2026-01-05 00:46:37.972449 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.59s
2026-01-05 00:46:37.972459 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.35s
2026-01-05 00:46:37.972468 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.10s
2026-01-05 00:46:37.972478 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.90s
2026-01-05 00:46:37.972488 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.60s
2026-01-05 00:46:37.972497 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.31s
2026-01-05 00:46:37.972507 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.30s
2026-01-05 00:46:37.972517 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.06s
2026-01-05 00:46:37.972527 | orchestrator | osism.services.netdata : Add netdata user to docker group
--------------- 1.90s 2026-01-05 00:46:37.972536 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.71s 2026-01-05 00:46:37.972546 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.64s 2026-01-05 00:46:37.972556 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.35s 2026-01-05 00:46:37.972565 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.30s 2026-01-05 00:46:37.972576 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.30s 2026-01-05 00:46:37.972585 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.13s 2026-01-05 00:46:41.003358 | orchestrator | 2026-01-05 00:46:41 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:46:41.005655 | orchestrator | 2026-01-05 00:46:41 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:46:41.006763 | orchestrator | 2026-01-05 00:46:41 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:46:41.006832 | orchestrator | 2026-01-05 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:44.091770 | orchestrator | 2026-01-05 00:46:44 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:46:44.093154 | orchestrator | 2026-01-05 00:46:44 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:46:44.094672 | orchestrator | 2026-01-05 00:46:44 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:46:44.094716 | orchestrator | 2026-01-05 00:46:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:46:47.139832 | orchestrator | 2026-01-05 00:46:47 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED 2026-01-05 00:46:47.142846 | 
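The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records above come from a simple poll-until-done loop over three task IDs. A minimal sketch of that pattern, assuming a caller-supplied `get_task_state` lookup (hypothetical illustration, not the actual osism client code):

```python
import time

def wait_for_tasks(task_ids, get_task_state, poll_interval=1.0, timeout=300.0):
    """Poll each task until every one leaves the STARTED state.

    get_task_state is a caller-supplied function mapping a task ID to a
    state string such as "STARTED" or "SUCCESS" -- a stand-in for whatever
    API the real tooling queries.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Terminal state (SUCCESS, FAILURE, ...): stop polling this task.
                pending.remove(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {pending}")
            print(f"Wait {poll_interval:g} second(s) until the next check")
            time.sleep(poll_interval)
    return states
```

In the log above, the three task IDs are polled on a roughly three-second cadence until the first one reports SUCCESS at 00:47:35.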
orchestrator | 2026-01-05 00:46:47 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:46:47.147248 | orchestrator | 2026-01-05 00:46:47 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:46:47.147508 | orchestrator | 2026-01-05 00:46:47 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:32.886674 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state STARTED
2026-01-05 00:47:32.888126 | orchestrator | 2026-01-05 00:47:32 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:47:32.890099 | orchestrator | 2026-01-05 00:47:32 |
INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:47:32.890135 | orchestrator | 2026-01-05 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:47:35.938852 | orchestrator |
2026-01-05 00:47:35.938955 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 9e23cc23-af02-4c6d-8358-28b2632d330f is in state SUCCESS
2026-01-05 00:47:35.940722 | orchestrator |
2026-01-05 00:47:35.940787 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-05 00:47:35.940796 | orchestrator |
2026-01-05 00:47:35.940803 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 00:47:35.940810 | orchestrator | Monday 05 January 2026 00:44:56 +0000 (0:00:00.272) 0:00:00.272 ********
2026-01-05 00:47:35.940824 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:47:35.940832 | orchestrator |
2026-01-05 00:47:35.940838 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-05 00:47:35.940844 | orchestrator | Monday 05 January 2026 00:44:57 +0000 (0:00:01.469) 0:00:01.742 ********
2026-01-05 00:47:35.940850 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940856 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940862 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940869 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940876 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940882 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940888 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940894 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:47:35.940900 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940906 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940913 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940920 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.940928 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.940935 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940941 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940948 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:47:35.940954 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.940981 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.941011 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.941019 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.941026 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:47:35.941034 | orchestrator |
2026-01-05 00:47:35.941041 | orchestrator | TASK [common
: include_tasks] **************************************************
2026-01-05 00:47:35.941049 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:04.035) 0:00:05.778 ********
2026-01-05 00:47:35.941058 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:47:35.941066 | orchestrator |
2026-01-05 00:47:35.941074 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-05 00:47:35.941080 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:01.249) 0:00:07.027 ********
2026-01-05 00:47:35.941091 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.941136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.941143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.941150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.941163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.941170 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 00:47:35.941191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
00:47:35.941208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941225 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.941309 | orchestrator | 2026-01-05 00:47:35.941314 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-05 00:47:35.941319 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:04.127) 0:00:11.154 ******** 2026-01-05 00:47:35.941324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:47:35.941329 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941338 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:47:35.941347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941367 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:47:35.941372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941424 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:47:35.941429 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:47:35.941433 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:35.941438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941452 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:35.941463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941485 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:35.941489 | orchestrator |
2026-01-05 00:47:35.941494 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-05 00:47:35.941498 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:02.062) 0:00:13.217 ********
2026-01-05 00:47:35.941503 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941517 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:47:35.941533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941554 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:47:35.941559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941573 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:47:35.941578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941592 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:47:35.941606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941633 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:35.941638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941654 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:35.941661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941683 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:35.941686 | orchestrator |
2026-01-05 00:47:35.941690 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-05 00:47:35.941694 | orchestrator | Monday 05 January 2026 00:45:12 +0000 (0:00:03.423) 0:00:16.640 ********
2026-01-05 00:47:35.941698 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:47:35.941701 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:47:35.941705 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:47:35.941709 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:47:35.941712 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:35.941716 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:35.941720 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:35.941723 | orchestrator |
2026-01-05 00:47:35.941727 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-05 00:47:35.941731 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:01.132) 0:00:17.300 ********
2026-01-05 00:47:35.941734 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:47:35.941738 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:47:35.941742 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:47:35.941745 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:47:35.941749 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:35.941753 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:35.941756 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:35.941760 | orchestrator |
2026-01-05 00:47:35.941764 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-05 00:47:35.941767 | orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:01.132) 0:00:18.432 ********
2026-01-05 00:47:35.941772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941776 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:47:35.941830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941837 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941876 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:47:35.941880 | orchestrator |
2026-01-05 00:47:35.941883 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-05 00:47:35.941887 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:09.045) 0:00:27.478 ********
2026-01-05 00:47:35.941891 | orchestrator | [WARNING]: Skipped
2026-01-05 00:47:35.941896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-05 00:47:35.941900 | orchestrator | to this access issue:
2026-01-05 00:47:35.941904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-05 00:47:35.941909 | orchestrator | directory
2026-01-05 00:47:35.941913 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:47:35.941917 | orchestrator |
2026-01-05 00:47:35.941920 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-05 00:47:35.941924 | orchestrator | Monday 05 January 2026 00:45:26 +0000 (0:00:02.569) 0:00:30.048 ********
2026-01-05 00:47:35.941928 | orchestrator | [WARNING]: Skipped
2026-01-05 00:47:35.941931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-05 00:47:35.941935 | orchestrator | to this access issue:
2026-01-05 00:47:35.941939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-05 00:47:35.941943 | orchestrator | directory
2026-01-05 00:47:35.941946 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:47:35.941950 | orchestrator |
2026-01-05 00:47:35.941954 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-05 00:47:35.941958 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:01.691) 0:00:31.740 ********
2026-01-05 00:47:35.941961 | orchestrator
| [WARNING]: Skipped 2026-01-05 00:47:35.941969 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-05 00:47:35.941972 | orchestrator | to this access issue: 2026-01-05 00:47:35.941976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-05 00:47:35.941980 | orchestrator | directory 2026-01-05 00:47:35.941984 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:47:35.941987 | orchestrator | 2026-01-05 00:47:35.941991 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-05 00:47:35.941995 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:01.332) 0:00:33.073 ******** 2026-01-05 00:47:35.941999 | orchestrator | [WARNING]: Skipped 2026-01-05 00:47:35.942002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-05 00:47:35.942006 | orchestrator | to this access issue: 2026-01-05 00:47:35.942010 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-05 00:47:35.942061 | orchestrator | directory 2026-01-05 00:47:35.942066 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:47:35.942070 | orchestrator | 2026-01-05 00:47:35.942074 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-05 00:47:35.942078 | orchestrator | Monday 05 January 2026 00:45:30 +0000 (0:00:00.755) 0:00:33.828 ******** 2026-01-05 00:47:35.942081 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.942085 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.942089 | orchestrator | changed: [testbed-manager] 2026-01-05 00:47:35.942093 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:47:35.942097 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.942100 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.942104 | 
orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.942108 | orchestrator | 2026-01-05 00:47:35.942112 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-05 00:47:35.942115 | orchestrator | Monday 05 January 2026 00:45:33 +0000 (0:00:03.745) 0:00:37.574 ******** 2026-01-05 00:47:35.942119 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942127 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942143 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942149 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:47:35.942155 | orchestrator | 2026-01-05 00:47:35.942161 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-05 00:47:35.942167 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:03.493) 0:00:41.067 ******** 2026-01-05 00:47:35.942173 | orchestrator | changed: [testbed-manager] 2026-01-05 00:47:35.942179 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.942185 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.942190 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:47:35.942200 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.942206 | 
orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.942212 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.942218 | orchestrator | 2026-01-05 00:47:35.942223 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-05 00:47:35.942229 | orchestrator | Monday 05 January 2026 00:45:40 +0000 (0:00:02.763) 0:00:43.830 ******** 2026-01-05 00:47:35.942240 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942252 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942258 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942312 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942325 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942351 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942358 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942370 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942404 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:47:35.942417 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942424 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942430 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942437 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942443 | orchestrator | 2026-01-05 00:47:35.942450 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-05 00:47:35.942456 | orchestrator | Monday 05 January 2026 00:45:42 +0000 (0:00:02.913) 0:00:46.744 ******** 2026-01-05 00:47:35.942462 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942475 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942487 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942493 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942500 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942507 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:47:35.942513 | orchestrator | 2026-01-05 00:47:35.942523 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-05 00:47:35.942530 | orchestrator | Monday 05 January 2026 00:45:47 +0000 (0:00:04.953) 0:00:51.698 ******** 2026-01-05 00:47:35.942536 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942545 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942557 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942563 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942568 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942574 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:47:35.942580 | orchestrator | 2026-01-05 00:47:35.942587 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-05 00:47:35.942591 | orchestrator | Monday 05 January 2026 00:45:51 +0000 (0:00:03.861) 0:00:55.560 ******** 2026-01-05 00:47:35.942595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942608 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:47:35.942644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942683 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942708 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 00:47:35.942738 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:47:35.942750 | orchestrator | 2026-01-05 00:47:35.942754 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-05 00:47:35.942760 | orchestrator | Monday 05 January 2026 00:45:55 +0000 (0:00:04.167) 0:00:59.727 ******** 2026-01-05 00:47:35.942770 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.942780 | orchestrator | changed: [testbed-manager] 2026-01-05 00:47:35.942791 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.942796 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:47:35.942802 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.942807 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.942813 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.942819 | orchestrator | 2026-01-05 00:47:35.942828 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-05 00:47:35.942834 | orchestrator | Monday 05 January 2026 00:45:58 +0000 (0:00:02.449) 
0:01:02.177 ******** 2026-01-05 00:47:35.942840 | orchestrator | changed: [testbed-manager] 2026-01-05 00:47:35.942845 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.942851 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.942856 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:47:35.942862 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.942867 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.942872 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.942877 | orchestrator | 2026-01-05 00:47:35.942883 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942889 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:01.733) 0:01:03.910 ******** 2026-01-05 00:47:35.942895 | orchestrator | 2026-01-05 00:47:35.942901 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942907 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.071) 0:01:03.981 ******** 2026-01-05 00:47:35.942914 | orchestrator | 2026-01-05 00:47:35.942920 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942926 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.064) 0:01:04.046 ******** 2026-01-05 00:47:35.942932 | orchestrator | 2026-01-05 00:47:35.942938 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942944 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.250) 0:01:04.296 ******** 2026-01-05 00:47:35.942952 | orchestrator | 2026-01-05 00:47:35.942956 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942960 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.072) 0:01:04.369 ******** 2026-01-05 00:47:35.942970 | orchestrator | 2026-01-05 
00:47:35.942974 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942978 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.070) 0:01:04.439 ******** 2026-01-05 00:47:35.942981 | orchestrator | 2026-01-05 00:47:35.942986 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:47:35.942990 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.066) 0:01:04.505 ******** 2026-01-05 00:47:35.942994 | orchestrator | 2026-01-05 00:47:35.942997 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-05 00:47:35.943001 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:00.090) 0:01:04.596 ******** 2026-01-05 00:47:35.943005 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.943009 | orchestrator | changed: [testbed-manager] 2026-01-05 00:47:35.943012 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.943016 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:47:35.943020 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.943024 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.943027 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.943031 | orchestrator | 2026-01-05 00:47:35.943036 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-05 00:47:35.943039 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:39.371) 0:01:43.968 ******** 2026-01-05 00:47:35.943043 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:47:35.943047 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:47:35.943051 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:47:35.943056 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:47:35.943060 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:47:35.943064 | orchestrator | changed: 
[testbed-manager]
2026-01-05 00:47:35.943068 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:35.943071 | orchestrator |
2026-01-05 00:47:35.943075 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-05 00:47:35.943079 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:42.279) 0:02:26.247 ********
2026-01-05 00:47:35.943083 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:47:35.943087 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:47:35.943091 | orchestrator | ok: [testbed-manager]
2026-01-05 00:47:35.943094 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:47:35.943098 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:35.943102 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:35.943109 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:35.943116 | orchestrator |
2026-01-05 00:47:35.943126 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-05 00:47:35.943132 | orchestrator | Monday 05 January 2026 00:47:24 +0000 (0:00:02.167) 0:02:28.415 ********
2026-01-05 00:47:35.943138 | orchestrator | changed: [testbed-manager]
2026-01-05 00:47:35.943144 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:47:35.943150 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:47:35.943155 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:47:35.943161 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:47:35.943167 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:47:35.943174 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:47:35.943180 | orchestrator |
2026-01-05 00:47:35.943187 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:35.943193 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943201 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943208 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943218 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943230 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943238 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943242 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:47:35.943246 | orchestrator |
2026-01-05 00:47:35.943252 | orchestrator |
2026-01-05 00:47:35.943258 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:47:35.943291 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:10.082) 0:02:38.498 ********
2026-01-05 00:47:35.943302 | orchestrator | ===============================================================================
2026-01-05 00:47:35.943308 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.28s
2026-01-05 00:47:35.943314 | orchestrator | common : Restart fluentd container ------------------------------------- 39.37s
2026-01-05 00:47:35.943321 | orchestrator | common : Restart cron container ---------------------------------------- 10.08s
2026-01-05 00:47:35.943327 | orchestrator | common : Copying over config.json files for services -------------------- 9.05s
2026-01-05 00:47:35.943333 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.95s
2026-01-05 00:47:35.943340 | orchestrator | common : Check common containers ---------------------------------------- 4.17s
2026-01-05 00:47:35.943344 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.13s
2026-01-05 00:47:35.943347 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.04s
2026-01-05 00:47:35.943351 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.86s
2026-01-05 00:47:35.943355 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.75s
2026-01-05 00:47:35.943358 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.49s
2026-01-05 00:47:35.943362 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.42s
2026-01-05 00:47:35.943366 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.91s
2026-01-05 00:47:35.943370 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.76s
2026-01-05 00:47:35.943374 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.57s
2026-01-05 00:47:35.943378 | orchestrator | common : Creating log volume -------------------------------------------- 2.45s
2026-01-05 00:47:35.943381 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.17s
2026-01-05 00:47:35.943385 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.06s
2026-01-05 00:47:35.943389 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.73s
2026-01-05 00:47:35.943394 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.69s
2026-01-05 00:47:35.943397 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:47:35.943402 | orchestrator | 2026-01-05 00:47:35 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:47:35.943406 | orchestrator | 2026-01-05
00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:38.984470 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:38.984960 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:38.985603 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:38.988851 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:38.989492 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:38.990140 | orchestrator | 2026-01-05 00:47:38 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:38.990168 | orchestrator | 2026-01-05 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:42.019758 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:42.021515 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:42.022497 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:42.023706 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:42.024806 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:42.025827 | orchestrator | 2026-01-05 00:47:42 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:42.025893 | orchestrator | 2026-01-05 00:47:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:45.075445 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 
f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:45.075644 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:45.076545 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:45.077833 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:45.080237 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:45.080370 | orchestrator | 2026-01-05 00:47:45 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:45.080380 | orchestrator | 2026-01-05 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:48.173123 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:48.174377 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:48.177959 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:48.179534 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:48.180877 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:48.183421 | orchestrator | 2026-01-05 00:47:48 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:48.183473 | orchestrator | 2026-01-05 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:51.229305 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:51.229624 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task 
9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:51.230415 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:51.231597 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:51.232593 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:51.234306 | orchestrator | 2026-01-05 00:47:51 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:51.234339 | orchestrator | 2026-01-05 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:54.292028 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:54.292826 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:54.293630 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:54.294753 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:54.296121 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:54.296974 | orchestrator | 2026-01-05 00:47:54 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:54.297467 | orchestrator | 2026-01-05 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:47:57.341642 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:47:57.342543 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:47:57.343193 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task 
6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:47:57.347207 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:47:57.347286 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:47:57.348209 | orchestrator | 2026-01-05 00:47:57 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state STARTED 2026-01-05 00:47:57.348258 | orchestrator | 2026-01-05 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:00.401066 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:00.407907 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:48:00.412013 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:00.416586 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:00.428863 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:00.432338 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:00.435473 | orchestrator | 2026-01-05 00:48:00 | INFO  | Task 0f449f7d-45dd-4fa3-931e-36db65c6fb77 is in state SUCCESS 2026-01-05 00:48:00.437775 | orchestrator | 2026-01-05 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:03.505435 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:03.505516 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:48:03.505549 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task 
7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:03.505554 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:03.505558 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:03.505562 | orchestrator | 2026-01-05 00:48:03 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:03.505566 | orchestrator | 2026-01-05 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:06.551579 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:06.552554 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:48:06.553729 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:06.554920 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:06.556066 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:06.557222 | orchestrator | 2026-01-05 00:48:06 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:06.557262 | orchestrator | 2026-01-05 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:09.618219 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:09.620693 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:48:09.623977 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:09.625850 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task 
6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:09.626626 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:09.627730 | orchestrator | 2026-01-05 00:48:09 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:09.627753 | orchestrator | 2026-01-05 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:12.671362 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:12.671962 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state STARTED 2026-01-05 00:48:12.673711 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:12.674621 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:12.677448 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:12.678307 | orchestrator | 2026-01-05 00:48:12 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:12.678343 | orchestrator | 2026-01-05 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:15.720416 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:15.720520 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 9ffacec8-16f9-43a3-8dea-6ed92917f7c4 is in state SUCCESS 2026-01-05 00:48:15.722090 | orchestrator | 2026-01-05 00:48:15.722162 | orchestrator | 2026-01-05 00:48:15.722170 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:48:15.722175 | orchestrator | 2026-01-05 00:48:15.722180 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-05 00:48:15.722184 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.351) 0:00:00.351 ******** 2026-01-05 00:48:15.722202 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:15.722208 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:15.722212 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:15.722215 | orchestrator | 2026-01-05 00:48:15.722219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:48:15.722223 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.375) 0:00:00.727 ******** 2026-01-05 00:48:15.722299 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-05 00:48:15.722303 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-05 00:48:15.722308 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-05 00:48:15.722312 | orchestrator | 2026-01-05 00:48:15.722316 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-05 00:48:15.722319 | orchestrator | 2026-01-05 00:48:15.722324 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-05 00:48:15.722328 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:01.243) 0:00:01.970 ******** 2026-01-05 00:48:15.722332 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:48:15.722338 | orchestrator | 2026-01-05 00:48:15.722341 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-05 00:48:15.722345 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.551) 0:00:02.522 ******** 2026-01-05 00:48:15.722349 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-05 00:48:15.722353 | orchestrator | changed: [testbed-node-2] => (item=memcached) 
2026-01-05 00:48:15.722357 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-05 00:48:15.722361 | orchestrator |
2026-01-05 00:48:15.722364 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-05 00:48:15.722368 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:01.084) 0:00:03.606 ********
2026-01-05 00:48:15.722372 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-05 00:48:15.722376 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-05 00:48:15.722379 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-05 00:48:15.722383 | orchestrator |
2026-01-05 00:48:15.722387 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-05 00:48:15.722391 | orchestrator | Monday 05 January 2026 00:47:47 +0000 (0:00:03.338) 0:00:06.944 ********
2026-01-05 00:48:15.722395 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:48:15.722398 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:48:15.722402 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:48:15.722406 | orchestrator |
2026-01-05 00:48:15.722409 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-05 00:48:15.722413 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:02.168) 0:00:09.113 ********
2026-01-05 00:48:15.722417 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:48:15.722420 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:48:15.722424 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:48:15.722428 | orchestrator |
2026-01-05 00:48:15.722431 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:48:15.722436 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:48:15.722441 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:48:15.722445 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:48:15.722454 | orchestrator |
2026-01-05 00:48:15.722458 | orchestrator |
2026-01-05 00:48:15.722462 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:48:15.722466 | orchestrator | Monday 05 January 2026 00:47:57 +0000 (0:00:07.887) 0:00:17.000 ********
2026-01-05 00:48:15.722469 | orchestrator | ===============================================================================
2026-01-05 00:48:15.722473 | orchestrator | memcached : Restart memcached container --------------------------------- 7.89s
2026-01-05 00:48:15.722477 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.34s
2026-01-05 00:48:15.722480 | orchestrator | memcached : Check memcached container ----------------------------------- 2.17s
2026-01-05 00:48:15.722484 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s
2026-01-05 00:48:15.722488 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.08s
2026-01-05 00:48:15.722491 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s
2026-01-05 00:48:15.722495 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-01-05 00:48:15.722499 | orchestrator |
2026-01-05 00:48:15.722502 | orchestrator |
2026-01-05 00:48:15.722506 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:48:15.722510 | orchestrator |
2026-01-05 00:48:15.722517 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:48:15.722521 | orchestrator | Monday 05 January 2026 00:47:41 +0000
(0:00:00.408) 0:00:00.408 ******** 2026-01-05 00:48:15.722525 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:15.722528 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:15.722532 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:15.722536 | orchestrator | 2026-01-05 00:48:15.722540 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:48:15.722555 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.840) 0:00:01.249 ******** 2026-01-05 00:48:15.722559 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-05 00:48:15.722563 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-05 00:48:15.722567 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-05 00:48:15.722570 | orchestrator | 2026-01-05 00:48:15.722574 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-05 00:48:15.722578 | orchestrator | 2026-01-05 00:48:15.722581 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-05 00:48:15.722585 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.839) 0:00:02.088 ******** 2026-01-05 00:48:15.722589 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:48:15.722593 | orchestrator | 2026-01-05 00:48:15.722597 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-05 00:48:15.722600 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:01.202) 0:00:03.290 ******** 2026-01-05 00:48:15.722606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722645 | orchestrator | 2026-01-05 00:48:15.722649 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-05 00:48:15.722653 | orchestrator | Monday 05 January 2026 00:47:46 +0000 (0:00:01.845) 
0:00:05.136 ******** 2026-01-05 00:48:15.722657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722693 | orchestrator | 2026-01-05 00:48:15.722697 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-05 00:48:15.722701 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:03.455) 0:00:08.592 ******** 2026-01-05 00:48:15.722705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722740 | orchestrator | 2026-01-05 00:48:15.722744 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-05 00:48:15.722748 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:03.297) 0:00:11.890 ******** 2026-01-05 00:48:15.722752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 
00:48:15.722780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:48:15.722784 | orchestrator | 2026-01-05 00:48:15.722788 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:48:15.722792 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:02.166) 0:00:14.056 ******** 2026-01-05 00:48:15.722796 | orchestrator | 2026-01-05 00:48:15.722800 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:48:15.722803 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.080) 0:00:14.137 ******** 2026-01-05 00:48:15.722807 | orchestrator | 2026-01-05 00:48:15.722814 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:48:15.722818 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.091) 0:00:14.228 ******** 2026-01-05 00:48:15.722821 | orchestrator | 2026-01-05 00:48:15.722825 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-05 00:48:15.722829 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.078) 0:00:14.306 ******** 2026-01-05 00:48:15.722833 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:15.722836 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:48:15.722840 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:15.722844 | orchestrator | 2026-01-05 00:48:15.722848 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-05 00:48:15.722851 | orchestrator | Monday 05 January 2026 00:48:06 +0000 (0:00:10.944) 0:00:25.251 ******** 2026-01-05 00:48:15.722855 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:15.722859 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:15.722862 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:15.722866 | orchestrator | 2026-01-05 00:48:15.722870 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:48:15.722874 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:48:15.722878 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:48:15.722881 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:48:15.722885 | orchestrator | 2026-01-05 00:48:15.722889 | orchestrator | 2026-01-05 00:48:15.722893 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:48:15.722896 | orchestrator | Monday 05 January 2026 00:48:12 +0000 (0:00:05.931) 0:00:31.182 ******** 2026-01-05 00:48:15.722900 | orchestrator | =============================================================================== 2026-01-05 00:48:15.722904 | orchestrator | redis : Restart redis container ---------------------------------------- 10.94s 2026-01-05 00:48:15.722908 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.93s 2026-01-05 00:48:15.722912 | orchestrator | redis : Copying over default config.json files -------------------------- 3.45s 2026-01-05 00:48:15.722915 | 
orchestrator | redis : Copying over redis config files --------------------------------- 3.30s 2026-01-05 00:48:15.722919 | orchestrator | redis : Check redis containers ------------------------------------------ 2.17s 2026-01-05 00:48:15.722923 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.85s 2026-01-05 00:48:15.722927 | orchestrator | redis : include_tasks --------------------------------------------------- 1.20s 2026-01-05 00:48:15.722931 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-01-05 00:48:15.722934 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-01-05 00:48:15.722938 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.25s 2026-01-05 00:48:15.722942 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:15.723611 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:15.724040 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:15.727120 | orchestrator | 2026-01-05 00:48:15 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:15.727170 | orchestrator | 2026-01-05 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:18.791440 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state STARTED 2026-01-05 00:48:18.792677 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED 2026-01-05 00:48:18.794838 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:48:18.796160 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 
26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED 2026-01-05 00:48:18.797443 | orchestrator | 2026-01-05 00:48:18 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:48:18.797472 | orchestrator | 2026-01-05 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:48:58.824673 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 
f4c4a939-6e23-4c0b-907d-66ac42e04fc3 is in state SUCCESS 2026-01-05 00:48:58.826602 | orchestrator | 2026-01-05 00:48:58.826657 | orchestrator | 2026-01-05 00:48:58.826664 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:48:58.826672 | orchestrator | 2026-01-05 00:48:58.826679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:48:58.826686 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.279) 0:00:00.279 ******** 2026-01-05 00:48:58.826693 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:58.826701 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:58.826707 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:58.826713 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:48:58.826720 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:48:58.826726 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:48:58.826732 | orchestrator | 2026-01-05 00:48:58.826739 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:48:58.826765 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.818) 0:00:01.097 ******** 2026-01-05 00:48:58.826772 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826778 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826784 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826790 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826796 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826802 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:48:58.826808 | orchestrator 
| 2026-01-05 00:48:58.826815 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-05 00:48:58.826822 | orchestrator | 2026-01-05 00:48:58.826826 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-05 00:48:58.826829 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:01.375) 0:00:02.473 ******** 2026-01-05 00:48:58.826835 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:48:58.826840 | orchestrator | 2026-01-05 00:48:58.826843 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 00:48:58.826847 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:02.731) 0:00:05.204 ******** 2026-01-05 00:48:58.826851 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-05 00:48:58.826855 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:48:58.826859 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:48:58.826863 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:48:58.826866 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:48:58.826870 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:48:58.826873 | orchestrator | 2026-01-05 00:48:58.826877 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 00:48:58.826881 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:02.645) 0:00:07.850 ******** 2026-01-05 00:48:58.826885 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:48:58.826888 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:48:58.826892 | orchestrator | changed: [testbed-node-0] => 
(item=openvswitch) 2026-01-05 00:48:58.826897 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:48:58.826900 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:48:58.826904 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:48:58.826908 | orchestrator | 2026-01-05 00:48:58.826912 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 00:48:58.826915 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:02.094) 0:00:09.944 ******** 2026-01-05 00:48:58.826919 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-05 00:48:58.826923 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:58.826927 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-05 00:48:58.826931 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:48:58.826935 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-05 00:48:58.826938 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:58.826942 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-05 00:48:58.826946 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:58.826949 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-05 00:48:58.826953 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:58.826957 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-05 00:48:58.826960 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:58.826970 | orchestrator | 2026-01-05 00:48:58.826975 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-05 00:48:58.826981 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:02.300) 0:00:12.245 ******** 2026-01-05 00:48:58.826987 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:58.827000 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 00:48:58.827008 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:58.827012 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:58.827016 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:58.827019 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:58.827023 | orchestrator | 2026-01-05 00:48:58.827027 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-05 00:48:58.827031 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:01.218) 0:00:13.464 ******** 2026-01-05 00:48:58.827048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827124 | orchestrator | 2026-01-05 00:48:58.827130 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-05 00:48:58.827137 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:02.203) 0:00:15.667 ******** 2026-01-05 00:48:58.827143 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-01-05 00:48:58.827247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827314 | orchestrator | 2026-01-05 00:48:58.827335 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-05 00:48:58.827354 | orchestrator | Monday 05 January 2026 00:48:00 +0000 (0:00:04.646) 0:00:20.314 ******** 2026-01-05 00:48:58.827376 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:58.827397 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:58.827411 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:48:58.827426 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:58.827468 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:58.827480 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:58.827492 | orchestrator | 2026-01-05 00:48:58.827504 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-05 00:48:58.827518 | orchestrator | Monday 05 January 2026 00:48:02 +0000 (0:00:01.860) 0:00:22.174 ******** 2026-01-05 00:48:58.827533 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:48:58.827784 | orchestrator | 2026-01-05 00:48:58.827801 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.827816 | orchestrator | Monday 05 January 2026 00:48:06 +0000 (0:00:03.874) 0:00:26.048 ******** 2026-01-05 00:48:58.827833 | orchestrator | 2026-01-05 00:48:58.827849 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.827863 | orchestrator | Monday 05 January 2026 00:48:07 +0000 (0:00:00.735) 0:00:26.784 ******** 2026-01-05 00:48:58.827879 | orchestrator | 2026-01-05 00:48:58.827895 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.827910 | orchestrator | Monday 05 January 2026 00:48:07 +0000 (0:00:00.396) 0:00:27.181 ******** 2026-01-05 00:48:58.827924 | orchestrator | 2026-01-05 
00:48:58.827936 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.827951 | orchestrator | Monday 05 January 2026 00:48:07 +0000 (0:00:00.281) 0:00:27.463 ******** 2026-01-05 00:48:58.827968 | orchestrator | 2026-01-05 00:48:58.827984 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.828000 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.505) 0:00:27.968 ******** 2026-01-05 00:48:58.828015 | orchestrator | 2026-01-05 00:48:58.828020 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:48:58.828026 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.459) 0:00:28.428 ******** 2026-01-05 00:48:58.828032 | orchestrator | 2026-01-05 00:48:58.828037 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-05 00:48:58.828044 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.170) 0:00:28.598 ******** 2026-01-05 00:48:58.828050 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:48:58.828056 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:48:58.828062 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:48:58.828068 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:58.828073 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:58.828079 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:58.828084 | orchestrator | 2026-01-05 00:48:58.828089 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-05 00:48:58.828095 | orchestrator | Monday 05 January 2026 00:48:20 +0000 (0:00:11.350) 0:00:39.948 ******** 2026-01-05 00:48:58.828101 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:58.828109 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:58.828115 | orchestrator | ok: [testbed-node-2] 
2026-01-05 00:48:58.828120 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:48:58.828126 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:48:58.828132 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:48:58.828138 | orchestrator |
2026-01-05 00:48:58.828150 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-05 00:48:58.828157 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:02.184) 0:00:42.132 ********
2026-01-05 00:48:58.828162 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:48:58.828168 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:48:58.828175 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:48:58.828208 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:48:58.828212 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:48:58.828216 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:48:58.828220 | orchestrator |
2026-01-05 00:48:58.828223 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-05 00:48:58.828227 | orchestrator | Monday 05 January 2026 00:48:33 +0000 (0:00:10.621) 0:00:52.754 ********
2026-01-05 00:48:58.828238 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-05 00:48:58.828250 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-05 00:48:58.828254 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-05 00:48:58.828257 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-05 00:48:58.828261 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-05 00:48:58.828265 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-05 00:48:58.828269 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-05 00:48:58.828273 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-05 00:48:58.828277 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-05 00:48:58.828280 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-05 00:48:58.828284 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-05 00:48:58.828288 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-05 00:48:58.828291 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828295 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828299 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828302 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828306 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828310 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-05 00:48:58.828314 | orchestrator |
2026-01-05 00:48:58.828317 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-05 00:48:58.828321 | orchestrator | Monday 05 January 2026 00:48:41 +0000 (0:00:08.232) 0:01:00.986 ********
2026-01-05 00:48:58.828325 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-05 00:48:58.828329 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:58.828333 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-05 00:48:58.828337 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:58.828340 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-05 00:48:58.828344 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:58.828348 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-05 00:48:58.828352 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-05 00:48:58.828355 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-05 00:48:58.828359 | orchestrator |
2026-01-05 00:48:58.828363 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-05 00:48:58.828367 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:03.049) 0:01:04.036 ********
2026-01-05 00:48:58.828371 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828374 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:58.828378 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828382 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:58.828385 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828395 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:58.828399 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828403 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828407 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-05 00:48:58.828410 | orchestrator |
2026-01-05 00:48:58.828414 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-05 00:48:58.828421 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:03.514) 0:01:07.550 ********
2026-01-05 00:48:58.828426 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:48:58.828429 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:48:58.828433 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:48:58.828437 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:48:58.828441 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:48:58.828444 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:48:58.828448 | orchestrator |
2026-01-05 00:48:58.828452 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:48:58.828456 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:48:58.828464 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:48:58.828468 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:48:58.828472 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 00:48:58.828476 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 00:48:58.828479 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 00:48:58.828483 | orchestrator |
2026-01-05 00:48:58.828487 | orchestrator |
2026-01-05 00:48:58.828491 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:48:58.828495 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:09.329) 0:01:16.880 ********
2026-01-05 00:48:58.828498 | orchestrator | ===============================================================================
2026-01-05 00:48:58.828502 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.95s
2026-01-05 00:48:58.828506 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.35s
2026-01-05 00:48:58.828509 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.23s
2026-01-05 00:48:58.828513 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.64s
2026-01-05 00:48:58.828517 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.88s
2026-01-05 00:48:58.828521 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.51s
2026-01-05 00:48:58.828524 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.05s
2026-01-05 00:48:58.828528 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.73s
2026-01-05 00:48:58.828532 | orchestrator | module-load : Load modules ---------------------------------------------- 2.65s
2026-01-05 00:48:58.828535 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.55s
2026-01-05 00:48:58.828539 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.30s
2026-01-05 00:48:58.828543 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.21s
2026-01-05 00:48:58.828547 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.18s
2026-01-05 00:48:58.828553 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.09s
2026-01-05 00:48:58.828557 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.86s
2026-01-05 00:48:58.828560 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.38s
2026-01-05 00:48:58.828564 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.22s
2026-01-05 00:48:58.828568 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2026-01-05 00:48:58.830619 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:48:58.832088 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:48:58.832701 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:48:58.833581 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:48:58.834985 | orchestrator | 2026-01-05 00:48:58 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:48:58.835142 | orchestrator | 2026-01-05 00:48:58 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:01.870852 | orchestrator | 2026-01-05 00:49:01 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:01.871533 | orchestrator | 2026-01-05 00:49:01 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:01.872607 | orchestrator | 2026-01-05 00:49:01 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:01.874295 | orchestrator | 2026-01-05 00:49:01 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:01.874794 | orchestrator | 2026-01-05 00:49:01 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:01.875214 | orchestrator | 2026-01-05 00:49:01 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:04.907292 | orchestrator | 2026-01-05 00:49:04 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:04.907393 | orchestrator | 2026-01-05 00:49:04 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:04.907432 | orchestrator | 2026-01-05 00:49:04 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:04.908287 | orchestrator | 2026-01-05 00:49:04 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:04.908944 | orchestrator | 2026-01-05 00:49:04 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:04.908954 | orchestrator | 2026-01-05 00:49:04 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:07.945967 | orchestrator | 2026-01-05 00:49:07 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:07.946422 | orchestrator | 2026-01-05 00:49:07 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:07.947317 | orchestrator | 2026-01-05 00:49:07 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:07.947963 | orchestrator | 2026-01-05 00:49:07 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:07.948896 | orchestrator | 2026-01-05 00:49:07 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:07.948929 | orchestrator | 2026-01-05 00:49:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:10.989292 | orchestrator | 2026-01-05 00:49:10 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:10.989461 | orchestrator | 2026-01-05 00:49:10 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:10.991569 | orchestrator | 2026-01-05 00:49:10 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:10.992205 | orchestrator | 2026-01-05 00:49:10 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:10.993086 | orchestrator | 2026-01-05 00:49:10 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:10.993124 | orchestrator | 2026-01-05 00:49:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:14.032649 | orchestrator | 2026-01-05 00:49:14 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:14.033206 | orchestrator | 2026-01-05 00:49:14 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:14.034393 | orchestrator | 2026-01-05 00:49:14 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:14.036528 | orchestrator | 2026-01-05 00:49:14 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:14.038957 | orchestrator | 2026-01-05 00:49:14 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:14.039001 | orchestrator | 2026-01-05 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:17.077350 | orchestrator | 2026-01-05 00:49:17 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:17.077463 | orchestrator | 2026-01-05 00:49:17 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:17.078500 | orchestrator | 2026-01-05 00:49:17 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:17.079595 | orchestrator | 2026-01-05 00:49:17 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:17.080434 | orchestrator | 2026-01-05 00:49:17 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:17.080459 | orchestrator | 2026-01-05 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:20.113353 | orchestrator | 2026-01-05 00:49:20 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:20.115542 | orchestrator | 2026-01-05 00:49:20 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:20.117358 | orchestrator | 2026-01-05 00:49:20 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:20.118767 | orchestrator | 2026-01-05 00:49:20 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:20.119748 | orchestrator | 2026-01-05 00:49:20 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:20.119798 | orchestrator | 2026-01-05 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:23.160079 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:23.161689 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:23.163261 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:23.165657 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:23.167224 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:23.167304 | orchestrator | 2026-01-05 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:26.202739 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:26.203363 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:26.203981 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:26.205037 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:26.206269 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:26.206350 | orchestrator | 2026-01-05 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:29.242064 | orchestrator | 2026-01-05 00:49:29 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:29.245767 | orchestrator | 2026-01-05 00:49:29 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:29.247627 | orchestrator | 2026-01-05 00:49:29 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:29.250494 | orchestrator | 2026-01-05 00:49:29 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:29.251301 | orchestrator | 2026-01-05 00:49:29 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:29.251353 | orchestrator | 2026-01-05 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:32.378650 | orchestrator | 2026-01-05 00:49:32 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:32.378760 | orchestrator | 2026-01-05 00:49:32 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:32.378772 | orchestrator | 2026-01-05 00:49:32 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:32.378780 | orchestrator | 2026-01-05 00:49:32 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:32.378789 | orchestrator | 2026-01-05 00:49:32 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:32.378797 | orchestrator | 2026-01-05 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:35.366275 | orchestrator | 2026-01-05 00:49:35 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:35.366362 | orchestrator | 2026-01-05 00:49:35 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:35.366602 | orchestrator | 2026-01-05 00:49:35 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:35.367521 | orchestrator | 2026-01-05 00:49:35 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:35.368641 | orchestrator | 2026-01-05 00:49:35 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:35.368720 | orchestrator | 2026-01-05 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:38.402241 | orchestrator | 2026-01-05 00:49:38 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:38.403443 | orchestrator | 2026-01-05 00:49:38 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:38.405200 | orchestrator | 2026-01-05 00:49:38 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:38.407303 | orchestrator | 2026-01-05 00:49:38 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:38.408334 | orchestrator | 2026-01-05 00:49:38 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:38.408370 | orchestrator | 2026-01-05 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:41.446009 | orchestrator | 2026-01-05 00:49:41 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:41.449068 | orchestrator | 2026-01-05 00:49:41 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:41.452797 | orchestrator | 2026-01-05 00:49:41 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:41.454965 | orchestrator | 2026-01-05 00:49:41 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:41.457912 | orchestrator | 2026-01-05 00:49:41 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:41.458785 | orchestrator | 2026-01-05 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:44.496061 | orchestrator | 2026-01-05 00:49:44 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:44.497366 | orchestrator | 2026-01-05 00:49:44 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:44.498790 | orchestrator | 2026-01-05 00:49:44 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:44.499704 | orchestrator | 2026-01-05 00:49:44 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:44.500574 | orchestrator | 2026-01-05 00:49:44 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:44.500607 | orchestrator | 2026-01-05 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:47.541005 | orchestrator | 2026-01-05 00:49:47 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:47.541122 | orchestrator | 2026-01-05 00:49:47 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:47.541163 | orchestrator | 2026-01-05 00:49:47 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:47.541175 | orchestrator | 2026-01-05 00:49:47 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:47.541185 | orchestrator | 2026-01-05 00:49:47 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:47.541196 | orchestrator | 2026-01-05 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:50.576758 | orchestrator | 2026-01-05 00:49:50 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:50.577449 | orchestrator | 2026-01-05 00:49:50 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:50.578209 | orchestrator | 2026-01-05 00:49:50 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:50.578971 | orchestrator | 2026-01-05 00:49:50 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:50.579703 | orchestrator | 2026-01-05 00:49:50 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:50.579958 | orchestrator | 2026-01-05 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:53.623590 | orchestrator | 2026-01-05 00:49:53 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:53.624012 | orchestrator | 2026-01-05 00:49:53 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:53.624791 | orchestrator | 2026-01-05 00:49:53 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:53.625191 | orchestrator | 2026-01-05 00:49:53 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:53.626006 | orchestrator | 2026-01-05 00:49:53 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:53.626081 | orchestrator | 2026-01-05 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:56.724762 | orchestrator | 2026-01-05 00:49:56 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:56.726002 | orchestrator | 2026-01-05 00:49:56 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:56.726068 | orchestrator | 2026-01-05 00:49:56 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:56.726156 | orchestrator | 2026-01-05 00:49:56 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:56.727298 | orchestrator | 2026-01-05 00:49:56 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:56.727390 | orchestrator | 2026-01-05 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:59.755515 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:49:59.755725 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:49:59.756496 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state STARTED
2026-01-05 00:49:59.758638 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:49:59.759440 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:49:59.759471 | orchestrator | 2026-01-05 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:02.796191 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task eb26cbd4-d415-40b3-9e71-da21380972a9 is in state STARTED
2026-01-05 00:50:02.796502 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:02.800761 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:02.801929 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 26648a9d-ed1a-48f6-a2ab-0a9b770f04b1 is in state SUCCESS
2026-01-05 00:50:02.803468 | orchestrator |
2026-01-05 00:50:02.803520 | orchestrator |
2026-01-05 00:50:02.803539 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-05 00:50:02.803553 | orchestrator |
2026-01-05 00:50:02.803566 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-05 00:50:02.803580 | orchestrator | Monday 05 January 2026 00:44:56 +0000 (0:00:00.270) 0:00:00.270 ********
2026-01-05 00:50:02.803609 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.803634 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.803647 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.803660 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:50:02.803672 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:50:02.803680 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:50:02.803688 | orchestrator |
2026-01-05 00:50:02.803698 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-05 00:50:02.803715 | orchestrator | Monday 05 January 2026 00:44:57 +0000 (0:00:00.817) 0:00:01.088 ********
2026-01-05 00:50:02.803762 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.803775 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.803787 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.803798 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.803810 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.803822 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.803830 | orchestrator |
2026-01-05 00:50:02.804470 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-05 00:50:02.804510 | orchestrator | Monday 05 January 2026 00:44:58 +0000 (0:00:00.697) 0:00:01.785 ********
2026-01-05 00:50:02.804522 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.804534 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.804545 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.804558 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.804569 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.804581 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.804593 | orchestrator |
2026-01-05 00:50:02.804606 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-05 00:50:02.804616 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:00.634) 0:00:02.420 ********
2026-01-05 00:50:02.804624 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:50:02.804631 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:50:02.804638 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.804645 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.804652 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:50:02.804659 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.804666 | orchestrator |
2026-01-05 00:50:02.804674 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-05 00:50:02.804686 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:02.359) 0:00:04.780 ********
2026-01-05 00:50:02.804693 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.804700 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.804708 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.804715 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:50:02.804722 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:50:02.804730 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:50:02.804737 | orchestrator |
2026-01-05 00:50:02.804745 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-05 00:50:02.804752 | orchestrator | Monday 05 January 2026 00:45:02 +0000 (0:00:01.013) 0:00:05.794 ********
2026-01-05 00:50:02.804759 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.804766 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.804773 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.804781 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:50:02.804788 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:50:02.804795 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:50:02.804802 | orchestrator |
2026-01-05 00:50:02.804809 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-05 00:50:02.804816 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:00.918) 0:00:06.712 ********
2026-01-05 00:50:02.804824 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.804831 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.804838 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.804845 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.804852 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.804859 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.804866 | orchestrator |
2026-01-05 00:50:02.804873 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-05 00:50:02.804881 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.720) 0:00:07.419 ********
2026-01-05 00:50:02.804888 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.804895 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.804902 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.804923 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.804930 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.804937 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.804944 | orchestrator |
2026-01-05 00:50:02.804951 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-05 00:50:02.804959 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.629) 0:00:08.140 ********
2026-01-05 00:50:02.804966 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.804975 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.804982 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.804989 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.804997 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.805007 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.805020 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.805031 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.805043 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.805055 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.805085 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.805098 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.805145 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.805159 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.805172 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.805234 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:50:02.805247 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:50:02.805267 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.805279 | orchestrator |
2026-01-05 00:50:02.805290 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-05 00:50:02.805302 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:01.398) 0:00:08.769 ********
2026-01-05 00:50:02.805314 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.805326 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.805338 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.805348 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.805356 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.805363 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.805370 | orchestrator |
2026-01-05 00:50:02.805377 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-05 00:50:02.805386 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.788) 0:00:10.168 ********
2026-01-05 00:50:02.805393 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.805401 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.805408 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.805415 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:50:02.805422 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:50:02.805430 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:50:02.805437 | orchestrator |
2026-01-05 00:50:02.805444 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-05 00:50:02.805451 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:00.788) 0:00:10.957 ********
2026-01-05 00:50:02.805458 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:50:02.805466 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.805473 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.805480 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:50:02.805487 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.805504 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:50:02.805511 | orchestrator |
2026-01-05 00:50:02.805519 | orchestrator |
TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-05 00:50:02.805531 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:06.066) 0:00:17.023 ******** 2026-01-05 00:50:02.805538 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.805545 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.805552 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.805560 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.805567 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.805574 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.805581 | orchestrator | 2026-01-05 00:50:02.805589 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-05 00:50:02.805596 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:01.726) 0:00:18.750 ******** 2026-01-05 00:50:02.805603 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.805610 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.805617 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.805625 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.805632 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.805639 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.805646 | orchestrator | 2026-01-05 00:50:02.805654 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-05 00:50:02.805663 | orchestrator | Monday 05 January 2026 00:45:18 +0000 (0:00:03.476) 0:00:22.226 ******** 2026-01-05 00:50:02.805670 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.805677 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.805684 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.805691 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.805699 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 00:50:02.805706 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.805713 | orchestrator | 2026-01-05 00:50:02.805720 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-05 00:50:02.805727 | orchestrator | Monday 05 January 2026 00:45:20 +0000 (0:00:01.703) 0:00:23.930 ******** 2026-01-05 00:50:02.805735 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-05 00:50:02.805742 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-05 00:50:02.805749 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.805757 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-05 00:50:02.805764 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-05 00:50:02.805771 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.805778 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-05 00:50:02.805785 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-05 00:50:02.805792 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.805799 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-05 00:50:02.805807 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-05 00:50:02.805814 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-05 00:50:02.805821 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-05 00:50:02.805829 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.805836 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.805843 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-05 00:50:02.805850 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-05 00:50:02.805857 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.805864 | orchestrator | 2026-01-05 00:50:02.805872 | orchestrator 
| TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-05 00:50:02.805887 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:01.664) 0:00:25.594 ******** 2026-01-05 00:50:02.805900 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.805907 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.805914 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.805922 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.805929 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.805936 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.805943 | orchestrator | 2026-01-05 00:50:02.805950 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-05 00:50:02.805988 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.832) 0:00:26.427 ******** 2026-01-05 00:50:02.805997 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:02.806004 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:02.806011 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:02.806083 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.806091 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806098 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806105 | orchestrator | 2026-01-05 00:50:02.806258 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-05 00:50:02.806270 | orchestrator | 2026-01-05 00:50:02.806278 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-05 00:50:02.806286 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:02.730) 0:00:29.157 ******** 2026-01-05 00:50:02.806293 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.806300 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.806307 | 
orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.806315 | orchestrator | 2026-01-05 00:50:02.806322 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-05 00:50:02.806329 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:03.249) 0:00:32.407 ******** 2026-01-05 00:50:02.806336 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.806344 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.806351 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.806358 | orchestrator | 2026-01-05 00:50:02.806365 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-05 00:50:02.806372 | orchestrator | Monday 05 January 2026 00:45:30 +0000 (0:00:01.296) 0:00:33.703 ******** 2026-01-05 00:50:02.806379 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.806387 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.806394 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.806401 | orchestrator | 2026-01-05 00:50:02.806408 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-05 00:50:02.806416 | orchestrator | Monday 05 January 2026 00:45:31 +0000 (0:00:01.001) 0:00:34.705 ******** 2026-01-05 00:50:02.806423 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.806440 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.806453 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.806460 | orchestrator | 2026-01-05 00:50:02.806468 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-05 00:50:02.806475 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:00.748) 0:00:35.453 ******** 2026-01-05 00:50:02.806482 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.806490 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806497 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
00:50:02.806504 | orchestrator | 2026-01-05 00:50:02.806511 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-05 00:50:02.806518 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:00.395) 0:00:35.851 ******** 2026-01-05 00:50:02.806526 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.806533 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806540 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.806547 | orchestrator | 2026-01-05 00:50:02.806554 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-05 00:50:02.806561 | orchestrator | Monday 05 January 2026 00:45:33 +0000 (0:00:01.370) 0:00:37.222 ******** 2026-01-05 00:50:02.806577 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806584 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.806591 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.806598 | orchestrator | 2026-01-05 00:50:02.806605 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-05 00:50:02.806613 | orchestrator | Monday 05 January 2026 00:45:35 +0000 (0:00:01.580) 0:00:38.802 ******** 2026-01-05 00:50:02.806620 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:02.806628 | orchestrator | 2026-01-05 00:50:02.806635 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-05 00:50:02.806642 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.674) 0:00:39.477 ******** 2026-01-05 00:50:02.806694 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.806701 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.806707 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.806714 | orchestrator | 2026-01-05 00:50:02.806738 | orchestrator | TASK [k3s_server : 
Create manifests directory on first master] ***************** 2026-01-05 00:50:02.806745 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:02.749) 0:00:42.227 ******** 2026-01-05 00:50:02.806752 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806759 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806765 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806780 | orchestrator | 2026-01-05 00:50:02.806787 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-05 00:50:02.806794 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.708) 0:00:42.936 ******** 2026-01-05 00:50:02.806801 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806808 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806814 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806821 | orchestrator | 2026-01-05 00:50:02.806827 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-05 00:50:02.806834 | orchestrator | Monday 05 January 2026 00:45:40 +0000 (0:00:00.897) 0:00:43.834 ******** 2026-01-05 00:50:02.806841 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806847 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806854 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806860 | orchestrator | 2026-01-05 00:50:02.806867 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-05 00:50:02.806885 | orchestrator | Monday 05 January 2026 00:45:42 +0000 (0:00:01.667) 0:00:45.501 ******** 2026-01-05 00:50:02.806892 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.806899 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806905 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806912 | orchestrator | 2026-01-05 00:50:02.806919 | orchestrator | TASK [k3s_server : Deploy 
kube-vip manifest] *********************************** 2026-01-05 00:50:02.806926 | orchestrator | Monday 05 January 2026 00:45:43 +0000 (0:00:01.346) 0:00:46.847 ******** 2026-01-05 00:50:02.806933 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.806939 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.806946 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.806952 | orchestrator | 2026-01-05 00:50:02.806959 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-05 00:50:02.806965 | orchestrator | Monday 05 January 2026 00:45:44 +0000 (0:00:00.595) 0:00:47.443 ******** 2026-01-05 00:50:02.806980 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.806990 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807001 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807012 | orchestrator | 2026-01-05 00:50:02.807023 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-05 00:50:02.807035 | orchestrator | Monday 05 January 2026 00:45:46 +0000 (0:00:02.405) 0:00:49.849 ******** 2026-01-05 00:50:02.807043 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807050 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807063 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807070 | orchestrator | 2026-01-05 00:50:02.807076 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-05 00:50:02.807083 | orchestrator | Monday 05 January 2026 00:45:49 +0000 (0:00:02.809) 0:00:52.658 ******** 2026-01-05 00:50:02.807090 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807096 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807103 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807131 | orchestrator | 2026-01-05 00:50:02.807138 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-01-05 00:50:02.807145 | orchestrator | Monday 05 January 2026 00:45:50 +0000 (0:00:01.660) 0:00:54.319 ******** 2026-01-05 00:50:02.807152 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:50:02.807161 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:50:02.807172 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:50:02.807179 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:02.807185 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:02.807192 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:50:02.807199 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:02.807205 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:02.807212 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:50:02.807219 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-01-05 00:50:02.807226 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:50:02.807232 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:50:02.807239 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807246 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807252 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807259 | orchestrator | 2026-01-05 00:50:02.807266 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-05 00:50:02.807272 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:43.296) 0:01:37.616 ******** 2026-01-05 00:50:02.807279 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.807286 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.807292 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.807299 | orchestrator | 2026-01-05 00:50:02.807306 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-05 00:50:02.807312 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:00.275) 0:01:37.891 ******** 2026-01-05 00:50:02.807319 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807325 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807332 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807339 | orchestrator | 2026-01-05 00:50:02.807345 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-05 00:50:02.807352 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:00.999) 0:01:38.890 ******** 2026-01-05 00:50:02.807363 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807370 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807377 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807384 | orchestrator | 2026-01-05 00:50:02.807395 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-05 00:50:02.807402 | orchestrator | Monday 05 January 2026 00:46:36 +0000 (0:00:01.180) 0:01:40.070 ******** 2026-01-05 00:50:02.807409 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807416 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807423 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807429 | orchestrator | 2026-01-05 00:50:02.807436 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-05 00:50:02.807443 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:53.600) 0:02:33.671 ******** 2026-01-05 00:50:02.807450 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807457 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807463 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807470 | orchestrator | 2026-01-05 00:50:02.807477 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-05 00:50:02.807486 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.660) 0:02:34.331 ******** 2026-01-05 00:50:02.807498 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807505 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807512 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807518 | orchestrator | 2026-01-05 00:50:02.807525 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-05 00:50:02.807531 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:00.673) 0:02:35.005 ******** 2026-01-05 00:50:02.807538 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807545 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807552 | orchestrator | changed: [testbed-node-2] 
2026-01-05 00:50:02.807559 | orchestrator | 2026-01-05 00:50:02.807566 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-05 00:50:02.807572 | orchestrator | Monday 05 January 2026 00:47:32 +0000 (0:00:00.649) 0:02:35.654 ******** 2026-01-05 00:50:02.807579 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807585 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807592 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807598 | orchestrator | 2026-01-05 00:50:02.807605 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-05 00:50:02.807611 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.853) 0:02:36.508 ******** 2026-01-05 00:50:02.807618 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807625 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807631 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807638 | orchestrator | 2026-01-05 00:50:02.807644 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-05 00:50:02.807651 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.303) 0:02:36.811 ******** 2026-01-05 00:50:02.807661 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807668 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807675 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807682 | orchestrator | 2026-01-05 00:50:02.807688 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-05 00:50:02.807695 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.649) 0:02:37.461 ******** 2026-01-05 00:50:02.807702 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807708 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807715 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807721 | orchestrator | 
2026-01-05 00:50:02.807727 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-05 00:50:02.807734 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.627) 0:02:38.088 ******** 2026-01-05 00:50:02.807741 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807752 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807759 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807765 | orchestrator | 2026-01-05 00:50:02.807772 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-05 00:50:02.807778 | orchestrator | Monday 05 January 2026 00:47:35 +0000 (0:00:01.144) 0:02:39.232 ******** 2026-01-05 00:50:02.807785 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:02.807791 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:02.807798 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:02.807804 | orchestrator | 2026-01-05 00:50:02.807811 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-05 00:50:02.807818 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:00.814) 0:02:40.047 ******** 2026-01-05 00:50:02.807824 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.807831 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.807837 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.807844 | orchestrator | 2026-01-05 00:50:02.807851 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-05 00:50:02.807857 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:00.305) 0:02:40.352 ******** 2026-01-05 00:50:02.807864 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:02.807870 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:02.807877 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:02.807883 | orchestrator | 
2026-01-05 00:50:02.807890 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-05 00:50:02.807896 | orchestrator | Monday 05 January 2026 00:47:37 +0000 (0:00:00.333) 0:02:40.686 ******** 2026-01-05 00:50:02.807903 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807910 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807916 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807923 | orchestrator | 2026-01-05 00:50:02.807929 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-05 00:50:02.807936 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:00.939) 0:02:41.626 ******** 2026-01-05 00:50:02.807942 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:02.807949 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:02.807955 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:02.807962 | orchestrator | 2026-01-05 00:50:02.807969 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-05 00:50:02.807975 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:00.606) 0:02:42.233 ******** 2026-01-05 00:50:02.807982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:02.807993 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:02.808001 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:50:02.808007 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:02.808014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:02.808021 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:50:02.808027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:50:02.808034 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:50:02.808040 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:50:02.808047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-05 00:50:02.808054 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:02.808065 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:02.808072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-05 00:50:02.808079 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:02.808086 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:02.808092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:50:02.808099 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:02.808105 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:02.808133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:50:02.808144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:50:02.808151 | orchestrator | 2026-01-05 00:50:02.808157 | orchestrator | 
PLAY [Deploy k3s worker nodes] *************************************************
2026-01-05 00:50:02.808164 | orchestrator |
2026-01-05 00:50:02.808171 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-05 00:50:02.808178 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:02.868) 0:02:45.101 ********
2026-01-05 00:50:02.808184 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.808191 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.808197 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.808204 | orchestrator |
2026-01-05 00:50:02.808210 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-05 00:50:02.808217 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.666) 0:02:45.768 ********
2026-01-05 00:50:02.808224 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.808230 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.808237 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.808243 | orchestrator |
2026-01-05 00:50:02.808250 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-05 00:50:02.808257 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.598) 0:02:46.367 ********
2026-01-05 00:50:02.808263 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.808270 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.808276 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.808283 | orchestrator |
2026-01-05 00:50:02.808290 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-05 00:50:02.808297 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.335) 0:02:46.702 ********
2026-01-05 00:50:02.808303 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:50:02.808310 | orchestrator |
2026-01-05 00:50:02.808316 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-05 00:50:02.808323 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.633) 0:02:47.336 ********
2026-01-05 00:50:02.808330 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.808336 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.808343 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.808349 | orchestrator |
2026-01-05 00:50:02.808356 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-05 00:50:02.808362 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:00.287) 0:02:47.623 ********
2026-01-05 00:50:02.808369 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.808376 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.808382 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.808389 | orchestrator |
2026-01-05 00:50:02.808395 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-05 00:50:02.808402 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:00.366) 0:02:47.990 ********
2026-01-05 00:50:02.808418 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.808425 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.808431 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.808438 | orchestrator |
2026-01-05 00:50:02.808445 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-05 00:50:02.808452 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:00.308) 0:02:48.298 ********
2026-01-05 00:50:02.808458 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.808465 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.808472 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.808479 | orchestrator |
2026-01-05 00:50:02.808490 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-05 00:50:02.808498 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:00.946) 0:02:49.245 ********
2026-01-05 00:50:02.808504 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.808511 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.808518 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.808524 | orchestrator |
2026-01-05 00:50:02.808531 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-05 00:50:02.808538 | orchestrator | Monday 05 January 2026 00:47:47 +0000 (0:00:01.427) 0:02:50.672 ********
2026-01-05 00:50:02.808544 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.808551 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.808557 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.808564 | orchestrator |
2026-01-05 00:50:02.808571 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-05 00:50:02.808577 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:01.555) 0:02:52.227 ********
2026-01-05 00:50:02.808584 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:50:02.808590 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:50:02.808597 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:50:02.808604 | orchestrator |
2026-01-05 00:50:02.808610 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-05 00:50:02.808617 | orchestrator |
2026-01-05 00:50:02.808623 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-05 00:50:02.808630 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:10.803) 0:03:03.031 ********
2026-01-05 00:50:02.808637 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.808643 | orchestrator |
2026-01-05 00:50:02.808650 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-05 00:50:02.808657 | orchestrator | Monday 05 January 2026 00:48:00 +0000 (0:00:01.345) 0:03:04.377 ********
2026-01-05 00:50:02.808663 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.808670 | orchestrator |
2026-01-05 00:50:02.808677 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-05 00:50:02.808683 | orchestrator | Monday 05 January 2026 00:48:01 +0000 (0:00:00.504) 0:03:04.881 ********
2026-01-05 00:50:02.808690 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-05 00:50:02.808697 | orchestrator |
2026-01-05 00:50:02.808703 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-05 00:50:02.808710 | orchestrator | Monday 05 January 2026 00:48:02 +0000 (0:00:00.616) 0:03:05.498 ********
2026-01-05 00:50:02.808717 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.808723 | orchestrator |
2026-01-05 00:50:02.808734 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-05 00:50:02.808740 | orchestrator | Monday 05 January 2026 00:48:03 +0000 (0:00:01.387) 0:03:06.886 ********
2026-01-05 00:50:02.808747 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.808754 | orchestrator |
2026-01-05 00:50:02.808760 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-05 00:50:02.808767 | orchestrator | Monday 05 January 2026 00:48:04 +0000 (0:00:01.101) 0:03:07.987 ********
2026-01-05 00:50:02.808774 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-05 00:50:02.808786 | orchestrator |
2026-01-05 00:50:02.808793 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-05 00:50:02.808800 | orchestrator | Monday 05 January 2026 00:48:06 +0000 (0:00:01.926) 0:03:09.914 ********
2026-01-05 00:50:02.808806 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-05 00:50:02.808813 | orchestrator |
2026-01-05 00:50:02.808819 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-05 00:50:02.808826 | orchestrator | Monday 05 January 2026 00:48:07 +0000 (0:00:01.115) 0:03:11.029 ********
2026-01-05 00:50:02.808833 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.808839 | orchestrator |
2026-01-05 00:50:02.808846 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-05 00:50:02.808853 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.607) 0:03:11.636 ********
2026-01-05 00:50:02.808859 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.808866 | orchestrator |
2026-01-05 00:50:02.808872 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-05 00:50:02.808879 | orchestrator |
2026-01-05 00:50:02.808886 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-05 00:50:02.808892 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:00.177) 0:03:12.483 ********
2026-01-05 00:50:02.808899 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.808906 | orchestrator |
2026-01-05 00:50:02.808912 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-05 00:50:02.808919 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:00.272) 0:03:12.661 ********
2026-01-05 00:50:02.808925 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:50:02.808932 | orchestrator |
2026-01-05 00:50:02.808939 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-05 00:50:02.808946 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:00.272) 0:03:12.933 ********
2026-01-05 00:50:02.808952 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.808959 | orchestrator |
2026-01-05 00:50:02.808966 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-05 00:50:02.808973 | orchestrator | Monday 05 January 2026 00:48:10 +0000 (0:00:01.186) 0:03:14.120 ********
2026-01-05 00:50:02.808980 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.808986 | orchestrator |
2026-01-05 00:50:02.808993 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-05 00:50:02.808999 | orchestrator | Monday 05 January 2026 00:48:13 +0000 (0:00:02.304) 0:03:16.425 ********
2026-01-05 00:50:02.809006 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.809013 | orchestrator |
2026-01-05 00:50:02.809019 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-05 00:50:02.809026 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:01.075) 0:03:17.501 ********
2026-01-05 00:50:02.809033 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.809039 | orchestrator |
2026-01-05 00:50:02.809050 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-05 00:50:02.809057 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.549) 0:03:18.050 ********
2026-01-05 00:50:02.809064 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.809071 | orchestrator |
2026-01-05 00:50:02.809078 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-05 00:50:02.809085 | orchestrator | Monday 05 January 2026 00:48:23 +0000 (0:00:09.006) 0:03:27.056 ********
2026-01-05 00:50:02.809091 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.809098 | orchestrator |
2026-01-05 00:50:02.809105 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-05 00:50:02.809126 | orchestrator | Monday 05 January 2026 00:48:40 +0000 (0:00:17.322) 0:03:44.378 ********
2026-01-05 00:50:02.809141 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.809148 | orchestrator |
2026-01-05 00:50:02.809163 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-05 00:50:02.809175 | orchestrator |
2026-01-05 00:50:02.809182 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-05 00:50:02.809189 | orchestrator | Monday 05 January 2026 00:48:41 +0000 (0:00:00.779) 0:03:45.158 ********
2026-01-05 00:50:02.809195 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:50:02.809202 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:50:02.809209 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:50:02.809216 | orchestrator |
2026-01-05 00:50:02.809222 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-05 00:50:02.809229 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:00.404) 0:03:45.562 ********
2026-01-05 00:50:02.809235 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809242 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.809248 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.809255 | orchestrator |
2026-01-05 00:50:02.809262 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-05 00:50:02.809268 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:00.381) 0:03:45.943 ********
2026-01-05 00:50:02.809275 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-01-05 00:50:02.809282 | orchestrator |
2026-01-05 00:50:02.809288 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-05 00:50:02.809295 | orchestrator | Monday 05 January 2026 00:48:43 +0000 (0:00:00.805) 0:03:46.749 ********
2026-01-05 00:50:02.809302 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809309 | orchestrator |
2026-01-05 00:50:02.809319 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-05 00:50:02.809325 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.742) 0:03:47.492 ********
2026-01-05 00:50:02.809332 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809338 | orchestrator |
2026-01-05 00:50:02.809345 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-05 00:50:02.809352 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.783) 0:03:48.275 ********
2026-01-05 00:50:02.809358 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809365 | orchestrator |
2026-01-05 00:50:02.809371 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-05 00:50:02.809378 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.096) 0:03:48.372 ********
2026-01-05 00:50:02.809385 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809391 | orchestrator |
2026-01-05 00:50:02.809398 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-05 00:50:02.809404 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:00.941) 0:03:49.313 ********
2026-01-05 00:50:02.809411 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809417 | orchestrator |
2026-01-05 00:50:02.809424 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-05 00:50:02.809431 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.121) 0:03:49.434 ********
2026-01-05 00:50:02.809438 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809444 | orchestrator |
2026-01-05 00:50:02.809451 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-05 00:50:02.809458 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.110) 0:03:49.544 ********
2026-01-05 00:50:02.809464 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809471 | orchestrator |
2026-01-05 00:50:02.809478 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-05 00:50:02.809484 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.196) 0:03:49.741 ********
2026-01-05 00:50:02.809491 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809497 | orchestrator |
2026-01-05 00:50:02.809504 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-05 00:50:02.809510 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:00.127) 0:03:49.869 ********
2026-01-05 00:50:02.809570 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809578 | orchestrator |
2026-01-05 00:50:02.809585 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-05 00:50:02.809592 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:04.486) 0:03:54.356 ********
2026-01-05 00:50:02.809599 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-05 00:50:02.809606 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-05 00:50:02.809612 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-05 00:50:02.809619 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-05 00:50:02.809626 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-05 00:50:02.809632 | orchestrator |
2026-01-05 00:50:02.809639 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-05 00:50:02.809646 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:42.154) 0:04:36.510 ********
2026-01-05 00:50:02.809658 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809664 | orchestrator |
2026-01-05 00:50:02.809671 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-05 00:50:02.809678 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:01.158) 0:04:37.669 ********
2026-01-05 00:50:02.809685 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809691 | orchestrator |
2026-01-05 00:50:02.809698 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-05 00:50:02.809705 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:01.535) 0:04:39.205 ********
2026-01-05 00:50:02.809712 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:50:02.809718 | orchestrator |
2026-01-05 00:50:02.809725 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-05 00:50:02.809732 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.948) 0:04:40.154 ********
2026-01-05 00:50:02.809738 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809745 | orchestrator |
2026-01-05 00:50:02.809752 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-05 00:50:02.809758 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.095) 0:04:40.250 ********
2026-01-05 00:50:02.809765 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-05 00:50:02.809772 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-05 00:50:02.809779 | orchestrator |
2026-01-05 00:50:02.809785 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-05 00:50:02.809792 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:01.666) 0:04:41.916 ********
2026-01-05 00:50:02.809798 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.809805 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.809812 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.809818 | orchestrator |
2026-01-05 00:50:02.809826 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-05 00:50:02.809832 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.423) 0:04:42.340 ********
2026-01-05 00:50:02.809839 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:50:02.809846 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:50:02.809852 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:50:02.809859 | orchestrator |
2026-01-05 00:50:02.809866 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-05 00:50:02.809872 | orchestrator |
2026-01-05 00:50:02.809879 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-05 00:50:02.809890 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:01.420) 0:04:43.760 ********
2026-01-05 00:50:02.809897 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:02.809909 | orchestrator |
2026-01-05 00:50:02.809916 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-05 00:50:02.809922 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:00.204) 0:04:43.964 ********
2026-01-05 00:50:02.809929 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:50:02.809936 | orchestrator |
2026-01-05 00:50:02.809943 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-05 00:50:02.809950 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:00.480) 0:04:44.444 ********
2026-01-05 00:50:02.809957 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:02.809963 | orchestrator |
2026-01-05 00:50:02.809970 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-05 00:50:02.809977 | orchestrator |
2026-01-05 00:50:02.809984 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-05 00:50:02.809990 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:05.861) 0:04:50.306 ********
2026-01-05 00:50:02.809997 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:50:02.810004 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:50:02.810010 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:50:02.810051 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:50:02.810058 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:50:02.810065 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:50:02.810072 | orchestrator |
2026-01-05 00:50:02.810078 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-05 00:50:02.810085 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:00.758) 0:04:51.064 ********
2026-01-05 00:50:02.810092 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:50:02.810098 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:50:02.810105 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:50:02.810150 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:50:02.810157 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:50:02.810163 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:50:02.810170 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:50:02.810176 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:50:02.810183 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:50:02.810190 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:50:02.810196 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:50:02.810203 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:50:02.810216 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:50:02.810223 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:50:02.810229 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:50:02.810236 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:50:02.810243 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:50:02.810249 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:50:02.810256 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:50:02.810262 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:50:02.810274 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:50:02.810281 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:50:02.810287 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:50:02.810294 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:50:02.810301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:50:02.810307 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:50:02.810314 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:50:02.810321 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:50:02.810327 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:50:02.810334 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:50:02.810341 | orchestrator |
2026-01-05 00:50:02.810348 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-05 00:50:02.810354 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:10.579) 0:05:01.644 ********
2026-01-05 00:50:02.810361 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.810372 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.810379 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.810385 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.810392 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.810399 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.810406 | orchestrator |
2026-01-05 00:50:02.810412 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-05 00:50:02.810419 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:00.692) 0:05:02.336 ********
2026-01-05 00:50:02.810426 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:02.810432 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:02.810439 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:02.810445 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:02.810452 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:02.810459 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:02.810465 | orchestrator |
2026-01-05 00:50:02.810472 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:50:02.810479 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:50:02.810488 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-05 00:50:02.810495 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:50:02.810502 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:50:02.810509 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:50:02.810516 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:50:02.810523 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:50:02.810529 | orchestrator |
2026-01-05 00:50:02.810536 | orchestrator |
2026-01-05 00:50:02.810543 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:50:02.810554 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:00.513) 0:05:02.849 ********
2026-01-05 00:50:02.810561 | orchestrator | ===============================================================================
2026-01-05 00:50:02.810568 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 53.60s
2026-01-05 00:50:02.810574 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.30s
2026-01-05 00:50:02.810581 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.15s
2026-01-05 00:50:02.810591 | orchestrator | kubectl : Install required packages ------------------------------------ 17.32s
2026-01-05 00:50:02.810598 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.80s
2026-01-05 00:50:02.810605 | orchestrator | Manage labels ---------------------------------------------------------- 10.58s
2026-01-05 00:50:02.810612 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.01s
2026-01-05 00:50:02.810618 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.07s
2026-01-05 00:50:02.810625 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.86s
2026-01-05 00:50:02.810632 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.49s
2026-01-05 00:50:02.810638 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.48s
2026-01-05 00:50:02.810645 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.25s
2026-01-05 00:50:02.810652 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.87s
2026-01-05 00:50:02.810658 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.81s
2026-01-05 00:50:02.810665 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.75s
2026-01-05 00:50:02.810672 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.73s
2026-01-05 00:50:02.810679 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.41s
2026-01-05 00:50:02.810685 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.36s
2026-01-05 00:50:02.810692 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.30s
2026-01-05 00:50:02.810699 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.93s
2026-01-05 00:50:02.810705 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:02.810712 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:02.810831 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 0aadcef0-50a7-45d1-84b1-c5c50c32a920 is in state STARTED
2026-01-05 00:50:02.810844 | orchestrator | 2026-01-05 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:05.873694 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task eb26cbd4-d415-40b3-9e71-da21380972a9 is in state STARTED
2026-01-05 00:50:05.873811 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:05.873832 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:05.873846 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:05.873859 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:05.873873 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 0aadcef0-50a7-45d1-84b1-c5c50c32a920 is in state STARTED
2026-01-05 00:50:05.873887 | orchestrator | 2026-01-05 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:08.913847 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task eb26cbd4-d415-40b3-9e71-da21380972a9 is in state STARTED
2026-01-05 00:50:08.913970 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:08.914644 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:08.915298 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:08.916874 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:08.917549 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 0aadcef0-50a7-45d1-84b1-c5c50c32a920 is in state SUCCESS
2026-01-05 00:50:08.917585 | orchestrator | 2026-01-05 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:11.946646 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task eb26cbd4-d415-40b3-9e71-da21380972a9 is in state STARTED
2026-01-05 00:50:11.951375 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:11.955852 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:11.958732 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:11.960577 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:11.960757 | orchestrator | 2026-01-05 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:14.994690 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task eb26cbd4-d415-40b3-9e71-da21380972a9 is in state SUCCESS
2026-01-05 00:50:14.998381 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:15.001964 | orchestrator | 2026-01-05 00:50:15 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:15.003530 | orchestrator | 2026-01-05 00:50:15 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:15.005512 | orchestrator | 2026-01-05 00:50:15 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:15.005645 | orchestrator | 2026-01-05 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:18.044853 | orchestrator | 2026-01-05 00:50:18 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:18.045046 | orchestrator | 2026-01-05 00:50:18 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:18.049437 | orchestrator | 2026-01-05 00:50:18 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:18.050268 | orchestrator | 2026-01-05 00:50:18 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:18.050340 | orchestrator | 2026-01-05 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:21.083412 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:21.085810 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:21.086766 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:21.088528 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:21.088629 | orchestrator | 2026-01-05 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:24.114526 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:24.114632 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:24.115269 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:24.115859 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:24.115890 | orchestrator | 2026-01-05 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:27.144565 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:27.144855 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:27.146394 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:27.147018 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:27.147068 | orchestrator | 2026-01-05 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:30.173537 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:30.174179 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:30.174760 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:30.175666 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:30.175703 | orchestrator | 2026-01-05 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:33.210296 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:33.210459 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:33.213908 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:33.214508 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:33.214540 | orchestrator | 2026-01-05 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:36.264531 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state STARTED
2026-01-05 00:50:36.270392 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED
2026-01-05 00:50:36.273915 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED
2026-01-05 00:50:36.276860 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED
2026-01-05 00:50:36.276929 | orchestrator | 2026-01-05 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:50:39.316810 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 7fedc996-b018-49fa-87d4-545803e26db5 is in state SUCCESS
2026-01-05 00:50:39.318181 | orchestrator |
2026-01-05 00:50:39.318248 | orchestrator | 2026-01-05
00:50:39.318256 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-05 00:50:39.318281 | orchestrator | 2026-01-05 00:50:39.318286 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:50:39.318293 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-01-05 00:50:39.318302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:50:39.318310 | orchestrator | 2026-01-05 00:50:39.318316 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:50:39.318322 | orchestrator | Monday 05 January 2026 00:50:05 +0000 (0:00:00.808) 0:00:00.975 ******** 2026-01-05 00:50:39.318328 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:39.318337 | orchestrator | 2026-01-05 00:50:39.318345 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-05 00:50:39.318353 | orchestrator | Monday 05 January 2026 00:50:07 +0000 (0:00:01.599) 0:00:02.575 ******** 2026-01-05 00:50:39.318362 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:39.318370 | orchestrator | 2026-01-05 00:50:39.318389 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:50:39.318403 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:50:39.318411 | orchestrator | 2026-01-05 00:50:39.318417 | orchestrator | 2026-01-05 00:50:39.318423 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:50:39.318428 | orchestrator | Monday 05 January 2026 00:50:07 +0000 (0:00:00.594) 0:00:03.170 ******** 2026-01-05 00:50:39.318435 | orchestrator | =============================================================================== 2026-01-05 00:50:39.318441 | 
orchestrator | Write kubeconfig file --------------------------------------------------- 1.60s 2026-01-05 00:50:39.318447 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2026-01-05 00:50:39.318453 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.59s 2026-01-05 00:50:39.318459 | orchestrator | 2026-01-05 00:50:39.318466 | orchestrator | 2026-01-05 00:50:39.318472 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 00:50:39.318478 | orchestrator | 2026-01-05 00:50:39.318484 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 00:50:39.318488 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-01-05 00:50:39.318492 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:39.318497 | orchestrator | 2026-01-05 00:50:39.318501 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 00:50:39.318505 | orchestrator | Monday 05 January 2026 00:50:05 +0000 (0:00:00.713) 0:00:00.866 ******** 2026-01-05 00:50:39.318509 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:39.318512 | orchestrator | 2026-01-05 00:50:39.318517 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:50:39.318520 | orchestrator | Monday 05 January 2026 00:50:06 +0000 (0:00:00.835) 0:00:01.702 ******** 2026-01-05 00:50:39.318525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:50:39.318529 | orchestrator | 2026-01-05 00:50:39.318535 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:50:39.318542 | orchestrator | Monday 05 January 2026 00:50:07 +0000 (0:00:00.999) 0:00:02.702 ******** 2026-01-05 00:50:39.318548 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:50:39.318553 | orchestrator | 2026-01-05 00:50:39.318560 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 00:50:39.318566 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:01.953) 0:00:04.655 ******** 2026-01-05 00:50:39.318572 | orchestrator | changed: [testbed-manager] 2026-01-05 00:50:39.318578 | orchestrator | 2026-01-05 00:50:39.318585 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 00:50:39.318591 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:00.486) 0:00:05.142 ******** 2026-01-05 00:50:39.318598 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:50:39.318613 | orchestrator | 2026-01-05 00:50:39.318619 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-05 00:50:39.318629 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:01.506) 0:00:06.649 ******** 2026-01-05 00:50:39.318636 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:50:39.318642 | orchestrator | 2026-01-05 00:50:39.318648 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-05 00:50:39.318735 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:00.756) 0:00:07.405 ******** 2026-01-05 00:50:39.318751 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:39.318756 | orchestrator | 2026-01-05 00:50:39.318760 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 00:50:39.318765 | orchestrator | Monday 05 January 2026 00:50:12 +0000 (0:00:00.404) 0:00:07.810 ******** 2026-01-05 00:50:39.318770 | orchestrator | ok: [testbed-manager] 2026-01-05 00:50:39.318774 | orchestrator | 2026-01-05 00:50:39.318779 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 
00:50:39.318784 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:50:39.318789 | orchestrator | 2026-01-05 00:50:39.318793 | orchestrator | 2026-01-05 00:50:39.318798 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:50:39.318802 | orchestrator | Monday 05 January 2026 00:50:12 +0000 (0:00:00.325) 0:00:08.136 ******** 2026-01-05 00:50:39.318808 | orchestrator | =============================================================================== 2026-01-05 00:50:39.318812 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.95s 2026-01-05 00:50:39.318817 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2026-01-05 00:50:39.318823 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.00s 2026-01-05 00:50:39.318851 | orchestrator | Create .kube directory -------------------------------------------------- 0.84s 2026-01-05 00:50:39.318858 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s 2026-01-05 00:50:39.318864 | orchestrator | Get home directory of operator user ------------------------------------- 0.71s 2026-01-05 00:50:39.318871 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-01-05 00:50:39.318877 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2026-01-05 00:50:39.318884 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s 2026-01-05 00:50:39.318891 | orchestrator | 2026-01-05 00:50:39.318897 | orchestrator | 2026-01-05 00:50:39.318904 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-05 00:50:39.318911 | orchestrator | 2026-01-05 00:50:39.318916 | orchestrator | TASK 
[Inform the user about the following task] ******************************** 2026-01-05 00:50:39.318921 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:00.143) 0:00:00.143 ******** 2026-01-05 00:50:39.318925 | orchestrator | ok: [localhost] => { 2026-01-05 00:50:39.318931 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-05 00:50:39.318936 | orchestrator | } 2026-01-05 00:50:39.318941 | orchestrator | 2026-01-05 00:50:39.318949 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-05 00:50:39.318953 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:00.135) 0:00:00.279 ******** 2026-01-05 00:50:39.318958 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-05 00:50:39.318964 | orchestrator | ...ignoring 2026-01-05 00:50:39.318968 | orchestrator | 2026-01-05 00:50:39.318972 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-05 00:50:39.318976 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:05.282) 0:00:05.562 ******** 2026-01-05 00:50:39.318985 | orchestrator | skipping: [localhost] 2026-01-05 00:50:39.318989 | orchestrator | 2026-01-05 00:50:39.318993 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-05 00:50:39.318997 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.081) 0:00:05.643 ******** 2026-01-05 00:50:39.319001 | orchestrator | ok: [localhost] 2026-01-05 00:50:39.319005 | orchestrator | 2026-01-05 00:50:39.319009 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:50:39.319012 | orchestrator | 2026-01-05 00:50:39.319016 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-05 00:50:39.319020 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.203) 0:00:05.847 ******** 2026-01-05 00:50:39.319024 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:39.319028 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:39.319031 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:39.319035 | orchestrator | 2026-01-05 00:50:39.319039 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:50:39.319043 | orchestrator | Monday 05 January 2026 00:48:15 +0000 (0:00:00.414) 0:00:06.261 ******** 2026-01-05 00:50:39.319046 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-05 00:50:39.319052 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-05 00:50:39.319058 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-05 00:50:39.319088 | orchestrator | 2026-01-05 00:50:39.319095 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-05 00:50:39.319101 | orchestrator | 2026-01-05 00:50:39.319107 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:50:39.319113 | orchestrator | Monday 05 January 2026 00:48:16 +0000 (0:00:01.124) 0:00:07.386 ******** 2026-01-05 00:50:39.319121 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:39.319127 | orchestrator | 2026-01-05 00:50:39.319134 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 00:50:39.319139 | orchestrator | Monday 05 January 2026 00:48:16 +0000 (0:00:00.702) 0:00:08.088 ******** 2026-01-05 00:50:39.319143 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:39.319146 | orchestrator | 2026-01-05 00:50:39.319150 | orchestrator | TASK [rabbitmq : Get current 
RabbitMQ version] ********************************* 2026-01-05 00:50:39.319154 | orchestrator | Monday 05 January 2026 00:48:18 +0000 (0:00:01.137) 0:00:09.226 ******** 2026-01-05 00:50:39.319158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319162 | orchestrator | 2026-01-05 00:50:39.319166 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-05 00:50:39.319170 | orchestrator | Monday 05 January 2026 00:48:18 +0000 (0:00:00.475) 0:00:09.702 ******** 2026-01-05 00:50:39.319173 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319177 | orchestrator | 2026-01-05 00:50:39.319181 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-05 00:50:39.319185 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:00.485) 0:00:10.187 ******** 2026-01-05 00:50:39.319189 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319192 | orchestrator | 2026-01-05 00:50:39.319196 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-05 00:50:39.319200 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:00.519) 0:00:10.707 ******** 2026-01-05 00:50:39.319204 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319208 | orchestrator | 2026-01-05 00:50:39.319212 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:50:39.319215 | orchestrator | Monday 05 January 2026 00:48:21 +0000 (0:00:01.470) 0:00:12.177 ******** 2026-01-05 00:50:39.319219 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:39.319223 | orchestrator | 2026-01-05 00:50:39.319227 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 00:50:39.319241 | orchestrator | Monday 05 January 2026 
00:48:22 +0000 (0:00:01.472) 0:00:13.650 ******** 2026-01-05 00:50:39.319245 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:39.319249 | orchestrator | 2026-01-05 00:50:39.319253 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-05 00:50:39.319257 | orchestrator | Monday 05 January 2026 00:48:23 +0000 (0:00:01.067) 0:00:14.717 ******** 2026-01-05 00:50:39.319261 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319265 | orchestrator | 2026-01-05 00:50:39.319268 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-05 00:50:39.319272 | orchestrator | Monday 05 January 2026 00:48:24 +0000 (0:00:01.406) 0:00:16.123 ******** 2026-01-05 00:50:39.319276 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319280 | orchestrator | 2026-01-05 00:50:39.319284 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-05 00:50:39.319288 | orchestrator | Monday 05 January 2026 00:48:26 +0000 (0:00:01.059) 0:00:17.183 ******** 2026-01-05 00:50:39.319299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319319 | orchestrator | 2026-01-05 00:50:39.319323 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-05 00:50:39.319327 | orchestrator | Monday 05 January 2026 00:48:28 +0000 (0:00:02.099) 0:00:19.282 ******** 2026-01-05 00:50:39.319338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319351 | orchestrator | 2026-01-05 00:50:39.319355 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-05 00:50:39.319359 | orchestrator | Monday 05 January 2026 00:48:30 +0000 (0:00:01.997) 0:00:21.280 ******** 2026-01-05 00:50:39.319365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:50:39.319378 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:50:39.319384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:50:39.319390 | orchestrator | 2026-01-05 00:50:39.319396 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-05 00:50:39.319405 | orchestrator | Monday 05 January 2026 00:48:32 +0000 (0:00:02.134) 0:00:23.415 ******** 2026-01-05 00:50:39.319414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:50:39.319425 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:50:39.319435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:50:39.319446 | orchestrator | 2026-01-05 00:50:39.319461 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-05 00:50:39.319468 | orchestrator | Monday 05 January 2026 00:48:35 +0000 (0:00:03.242) 0:00:26.658 ******** 2026-01-05 00:50:39.319474 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 00:50:39.319480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 
00:50:39.319485 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 00:50:39.319491 | orchestrator | 2026-01-05 00:50:39.319497 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-05 00:50:39.319503 | orchestrator | Monday 05 January 2026 00:48:37 +0000 (0:00:02.341) 0:00:29.000 ******** 2026-01-05 00:50:39.319509 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:50:39.319516 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:50:39.319522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:50:39.319528 | orchestrator | 2026-01-05 00:50:39.319539 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-05 00:50:39.319546 | orchestrator | Monday 05 January 2026 00:48:40 +0000 (0:00:02.918) 0:00:31.918 ******** 2026-01-05 00:50:39.319550 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:50:39.319554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:50:39.319559 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:50:39.319562 | orchestrator | 2026-01-05 00:50:39.319566 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-05 00:50:39.319570 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:02.112) 0:00:34.031 ******** 2026-01-05 00:50:39.319574 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:50:39.319578 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:50:39.319582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:50:39.319585 | orchestrator | 2026-01-05 00:50:39.319589 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:50:39.319593 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:01.666) 0:00:35.697 ******** 2026-01-05 00:50:39.319597 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319601 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:39.319605 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:39.319609 | orchestrator | 2026-01-05 00:50:39.319613 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-05 00:50:39.319621 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:00.746) 0:00:36.443 ******** 2026-01-05 00:50:39.319626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:50:39.319648 | orchestrator | 2026-01-05 00:50:39.319652 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-05 00:50:39.319656 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:01.700) 0:00:38.144 ******** 2026-01-05 00:50:39.319660 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:39.319664 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:39.319668 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:39.319671 | orchestrator | 2026-01-05 00:50:39.319675 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-05 00:50:39.319679 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:00.959) 0:00:39.104 ******** 2026-01-05 00:50:39.319688 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:39.319693 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:39.319697 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:39.319700 | orchestrator | 2026-01-05 00:50:39.319704 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-05 00:50:39.319708 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:08.455) 0:00:47.559 ******** 2026-01-05 00:50:39.319712 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:39.319716 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:39.319719 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:39.319723 | orchestrator | 2026-01-05 00:50:39.319727 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:50:39.319730 | 
orchestrator | 2026-01-05 00:50:39.319734 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:50:39.319741 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:00.284) 0:00:47.844 ******** 2026-01-05 00:50:39.319747 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:39.319756 | orchestrator | 2026-01-05 00:50:39.319763 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:50:39.319768 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:00.660) 0:00:48.504 ******** 2026-01-05 00:50:39.319774 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:39.319780 | orchestrator | 2026-01-05 00:50:39.319786 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 00:50:39.319792 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:00.309) 0:00:48.813 ******** 2026-01-05 00:50:39.319797 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:39.319803 | orchestrator | 2026-01-05 00:50:39.319809 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:50:39.319814 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:02.120) 0:00:50.933 ******** 2026-01-05 00:50:39.319820 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:50:39.319826 | orchestrator | 2026-01-05 00:50:39.319832 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:50:39.319838 | orchestrator | 2026-01-05 00:50:39.319843 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:50:39.319849 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:56.201) 0:01:47.135 ******** 2026-01-05 00:50:39.319855 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:50:39.319861 | orchestrator | 2026-01-05 00:50:39.319867 | orchestrator | 
TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:50:39.319873 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:00.640) 0:01:47.775 ******** 2026-01-05 00:50:39.319880 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:39.319886 | orchestrator | 2026-01-05 00:50:39.319893 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 00:50:39.319899 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:00.213) 0:01:47.989 ******** 2026-01-05 00:50:39.319906 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:39.319912 | orchestrator | 2026-01-05 00:50:39.319917 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:50:39.319920 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:02.091) 0:01:50.081 ******** 2026-01-05 00:50:39.319924 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:50:39.319928 | orchestrator | 2026-01-05 00:50:39.319932 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:50:39.319935 | orchestrator | 2026-01-05 00:50:39.319939 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:50:39.319947 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:17.841) 0:02:07.922 ******** 2026-01-05 00:50:39.319951 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:39.319955 | orchestrator | 2026-01-05 00:50:39.319959 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:50:39.319969 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.623) 0:02:08.545 ******** 2026-01-05 00:50:39.319973 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:39.319980 | orchestrator | 2026-01-05 00:50:39.319985 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-01-05 00:50:39.319991 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.242) 0:02:08.787 ******** 2026-01-05 00:50:39.319996 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:39.320001 | orchestrator | 2026-01-05 00:50:39.320010 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:50:39.320018 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:06.783) 0:02:15.571 ******** 2026-01-05 00:50:39.320024 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:50:39.320030 | orchestrator | 2026-01-05 00:50:39.320037 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-05 00:50:39.320043 | orchestrator | 2026-01-05 00:50:39.320049 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-05 00:50:39.320060 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:09.614) 0:02:25.186 ******** 2026-01-05 00:50:39.320207 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:50:39.320212 | orchestrator | 2026-01-05 00:50:39.320216 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-05 00:50:39.320220 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.507) 0:02:25.693 ******** 2026-01-05 00:50:39.320224 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 00:50:39.320228 | orchestrator | enable_outward_rabbitmq_True 2026-01-05 00:50:39.320232 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 00:50:39.320235 | orchestrator | outward_rabbitmq_restart 2026-01-05 00:50:39.320240 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:50:39.320244 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:50:39.320247 | orchestrator | ok: [testbed-node-1] 2026-01-05 
00:50:39.320251 | orchestrator | 2026-01-05 00:50:39.320255 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-05 00:50:39.320259 | orchestrator | skipping: no hosts matched 2026-01-05 00:50:39.320263 | orchestrator | 2026-01-05 00:50:39.320267 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-05 00:50:39.320271 | orchestrator | skipping: no hosts matched 2026-01-05 00:50:39.320275 | orchestrator | 2026-01-05 00:50:39.320278 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-05 00:50:39.320282 | orchestrator | skipping: no hosts matched 2026-01-05 00:50:39.320286 | orchestrator | 2026-01-05 00:50:39.320290 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:50:39.320295 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-05 00:50:39.320300 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-05 00:50:39.320304 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:50:39.320308 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:50:39.320312 | orchestrator | 2026-01-05 00:50:39.320315 | orchestrator | 2026-01-05 00:50:39.320319 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:50:39.320323 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:02.768) 0:02:28.462 ******** 2026-01-05 00:50:39.320327 | orchestrator | =============================================================================== 2026-01-05 00:50:39.320331 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.66s 2026-01-05 
00:50:39.320344 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.00s 2026-01-05 00:50:39.320348 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.46s 2026-01-05 00:50:39.320352 | orchestrator | Check RabbitMQ service -------------------------------------------------- 5.28s 2026-01-05 00:50:39.320356 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.24s 2026-01-05 00:50:39.320360 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.92s 2026-01-05 00:50:39.320364 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.77s 2026-01-05 00:50:39.320367 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.34s 2026-01-05 00:50:39.320371 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.13s 2026-01-05 00:50:39.320376 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.11s 2026-01-05 00:50:39.320381 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.10s 2026-01-05 00:50:39.320387 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.00s 2026-01-05 00:50:39.320396 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.92s 2026-01-05 00:50:39.320405 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.70s 2026-01-05 00:50:39.320410 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.67s 2026-01-05 00:50:39.320424 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.47s 2026-01-05 00:50:39.320431 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.47s 2026-01-05 00:50:39.320437 
| orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.41s 2026-01-05 00:50:39.320444 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.14s 2026-01-05 00:50:39.320451 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2026-01-05 00:50:39.320458 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:50:39.321606 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in state STARTED 2026-01-05 00:50:39.322918 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:50:39.322964 | orchestrator | 2026-01-05 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:51:34.159606 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:51:34.162508 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task 198f2fae-4cc4-434a-a2e6-4fe894fe613f is in 
state SUCCESS 2026-01-05 00:51:34.164902 | orchestrator | 2026-01-05 00:51:34.164959 | orchestrator | 2026-01-05 00:51:34.164967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:51:34.164974 | orchestrator | 2026-01-05 00:51:34.164979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:51:34.165024 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:00.168) 0:00:00.168 ******** 2026-01-05 00:51:34.165028 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:34.165031 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:34.165043 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:34.165047 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.165050 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.165053 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.165056 | orchestrator | 2026-01-05 00:51:34.165060 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:51:34.165063 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:00.641) 0:00:00.809 ******** 2026-01-05 00:51:34.165068 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-05 00:51:34.165075 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-05 00:51:34.165080 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-05 00:51:34.165086 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-05 00:51:34.165092 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-05 00:51:34.165098 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-05 00:51:34.165104 | orchestrator | 2026-01-05 00:51:34.165110 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-05 00:51:34.165117 | orchestrator | 2026-01-05 00:51:34.165123 | orchestrator | TASK 
[ovn-controller : include_tasks] ****************************************** 2026-01-05 00:51:34.165129 | orchestrator | Monday 05 January 2026 00:49:03 +0000 (0:00:00.888) 0:00:01.697 ******** 2026-01-05 00:51:34.165136 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:51:34.165142 | orchestrator | 2026-01-05 00:51:34.165148 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-05 00:51:34.165154 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:01.318) 0:00:03.015 ******** 2026-01-05 00:51:34.165162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165209 | orchestrator | 2026-01-05 00:51:34.165214 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-05 00:51:34.165280 | orchestrator | Monday 05 January 2026 00:49:07 +0000 (0:00:02.077) 0:00:05.093 ******** 2026-01-05 00:51:34.165287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165311 | orchestrator | 2026-01-05 00:51:34.165314 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-05 00:51:34.165318 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:01.980) 0:00:07.073 ******** 2026-01-05 00:51:34.165321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 00:51:34.165332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165382 | orchestrator | 2026-01-05 00:51:34.165385 | orchestrator | 
TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-05 00:51:34.165389 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:01.361) 0:00:08.435 ******** 2026-01-05 00:51:34.165392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165420 | orchestrator | 2026-01-05 00:51:34.165423 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-05 00:51:34.165426 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:01.689) 0:00:10.125 ******** 2026-01-05 00:51:34.165429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.165451 | orchestrator | 2026-01-05 00:51:34.165454 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-05 00:51:34.165457 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:01.226) 0:00:11.351 ******** 2026-01-05 00:51:34.165460 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:34.165464 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:34.165467 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:34.165472 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.165477 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.165481 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.165486 | orchestrator | 2026-01-05 00:51:34.165490 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-05 00:51:34.165495 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:02.311) 0:00:13.663 ******** 2026-01-05 00:51:34.165501 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-05 00:51:34.165507 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-05 00:51:34.165512 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-05 00:51:34.165520 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-05 00:51:34.165525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-05 00:51:34.165530 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-05 00:51:34.165537 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165543 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165548 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165558 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:51:34.165569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165587 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165593 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:51:34.165610 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165615 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165621 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:51:34.165641 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165647 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165652 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:51:34.165673 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165678 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165684 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165689 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:51:34.165705 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:51:34.165711 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:51:34.165716 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:51:34.165722 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:51:34.165731 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:51:34.165738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:51:34.165748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-05 00:51:34.165755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-05 00:51:34.165761 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-05 00:51:34.165768 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-05 00:51:34.165774 | orchestrator | ok: [testbed-node-0] 
=> (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-05 00:51:34.165779 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-05 00:51:34.165785 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:51:34.165791 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:51:34.165797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:51:34.165802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:51:34.165808 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:51:34.165814 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:51:34.165820 | orchestrator | 2026-01-05 00:51:34.165826 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:51:34.165832 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:20.122) 0:00:33.786 ******** 2026-01-05 00:51:34.165838 | orchestrator | 2026-01-05 00:51:34.165844 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:51:34.165849 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:00.066) 0:00:33.852 ******** 2026-01-05 00:51:34.165855 | orchestrator | 2026-01-05 00:51:34.165861 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 
00:51:34.165867 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:00.061) 0:00:33.913 ******** 2026-01-05 00:51:34.165873 | orchestrator | 2026-01-05 00:51:34.165879 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:51:34.165885 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:00.063) 0:00:33.977 ******** 2026-01-05 00:51:34.165890 | orchestrator | 2026-01-05 00:51:34.165896 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:51:34.165902 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.059) 0:00:34.036 ******** 2026-01-05 00:51:34.165908 | orchestrator | 2026-01-05 00:51:34.165914 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:51:34.165919 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.063) 0:00:34.100 ******** 2026-01-05 00:51:34.165925 | orchestrator | 2026-01-05 00:51:34.165931 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-05 00:51:34.165937 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.062) 0:00:34.162 ******** 2026-01-05 00:51:34.165943 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:34.165949 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:34.165955 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:34.165961 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.165967 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.165973 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166082 | orchestrator | 2026-01-05 00:51:34.166094 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-05 00:51:34.166100 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:02.158) 0:00:36.320 ******** 2026-01-05 00:51:34.166106 | orchestrator | changed: [testbed-node-0] 
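The `Configure OVN in OVSDB` task above writes per-chassis `external_ids` keys (`ovn-encap-ip`, `ovn-encap-type`, `ovn-remote`, probe intervals, `ovn-monitor-all`) into the local Open vSwitch database. A minimal sketch of the equivalent `ovs-vsctl` commands, using the values visible in the log (nodes 0–2 host the OVN DB cluster on port 6642; the helper function itself is hypothetical, not the kolla-ansible implementation):

```python
# Sketch: generate the ovs-vsctl commands equivalent to the
# "Configure OVN in OVSDB" task output above. IPs, port, and key
# names are taken from the log; the function is illustrative only.

def ovn_external_id_cmds(node_index: int, db_hosts: list) -> list:
    """Build `ovs-vsctl set` commands for one chassis (testbed-node-<index>)."""
    encap_ip = "192.168.16.{}".format(10 + node_index)
    remote = ",".join("tcp:{}:6642".format(h) for h in db_hosts)
    settings = [
        ("ovn-encap-ip", encap_ip),
        ("ovn-encap-type", "geneve"),
        ("ovn-remote", remote),
        ("ovn-remote-probe-interval", "60000"),
        ("ovn-openflow-probe-interval", "60"),
        ("ovn-monitor-all", "false"),
    ]
    return [
        'ovs-vsctl set Open_vSwitch . external_ids:{}="{}"'.format(key, value)
        for key, value in settings
    ]

db_hosts = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
for cmd in ovn_external_id_cmds(3, db_hosts):
    print(cmd)
```

The `ovn-remote` value matches the comma-separated SB DB endpoints shown for every node in the log.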
2026-01-05 00:51:34.166112 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:34.166118 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:34.166123 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.166128 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:34.166133 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.166138 | orchestrator | 2026-01-05 00:51:34.166144 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-05 00:51:34.166149 | orchestrator | 2026-01-05 00:51:34.166154 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:51:34.166160 | orchestrator | Monday 05 January 2026 00:50:15 +0000 (0:00:37.233) 0:01:13.553 ******** 2026-01-05 00:51:34.166166 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:51:34.166171 | orchestrator | 2026-01-05 00:51:34.166211 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:51:34.166221 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:00.747) 0:01:14.301 ******** 2026-01-05 00:51:34.166224 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:51:34.166227 | orchestrator | 2026-01-05 00:51:34.166236 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-05 00:51:34.166240 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:00.547) 0:01:14.849 ******** 2026-01-05 00:51:34.166243 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166246 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166249 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166252 | orchestrator | 2026-01-05 00:51:34.166257 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB 
volume availability] *************** 2026-01-05 00:51:34.166260 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.973) 0:01:15.822 ******** 2026-01-05 00:51:34.166263 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166266 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166270 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166273 | orchestrator | 2026-01-05 00:51:34.166276 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-05 00:51:34.166279 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:00.365) 0:01:16.188 ******** 2026-01-05 00:51:34.166282 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166285 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166288 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166291 | orchestrator | 2026-01-05 00:51:34.166294 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-05 00:51:34.166297 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:00.348) 0:01:16.536 ******** 2026-01-05 00:51:34.166300 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166303 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166306 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166309 | orchestrator | 2026-01-05 00:51:34.166312 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-05 00:51:34.166315 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:00.355) 0:01:16.892 ******** 2026-01-05 00:51:34.166318 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166322 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166325 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166328 | orchestrator | 2026-01-05 00:51:34.166331 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-05 00:51:34.166334 | 
orchestrator | Monday 05 January 2026 00:50:19 +0000 (0:00:00.601) 0:01:17.493 ******** 2026-01-05 00:51:34.166337 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166344 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166347 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166350 | orchestrator | 2026-01-05 00:51:34.166355 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-05 00:51:34.166360 | orchestrator | Monday 05 January 2026 00:50:19 +0000 (0:00:00.268) 0:01:17.762 ******** 2026-01-05 00:51:34.166365 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166370 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166375 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166380 | orchestrator | 2026-01-05 00:51:34.166384 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-05 00:51:34.166389 | orchestrator | Monday 05 January 2026 00:50:20 +0000 (0:00:00.284) 0:01:18.047 ******** 2026-01-05 00:51:34.166394 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166399 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166404 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166409 | orchestrator | 2026-01-05 00:51:34.166415 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-05 00:51:34.166420 | orchestrator | Monday 05 January 2026 00:50:20 +0000 (0:00:00.290) 0:01:18.338 ******** 2026-01-05 00:51:34.166425 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166432 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166439 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166444 | orchestrator | 2026-01-05 00:51:34.166449 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-05 00:51:34.166455 | 
orchestrator | Monday 05 January 2026 00:50:20 +0000 (0:00:00.468) 0:01:18.806 ******** 2026-01-05 00:51:34.166460 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166465 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166470 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166475 | orchestrator | 2026-01-05 00:51:34.166480 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-05 00:51:34.166485 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:00.333) 0:01:19.140 ******** 2026-01-05 00:51:34.166490 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166495 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166500 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166505 | orchestrator | 2026-01-05 00:51:34.166510 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-05 00:51:34.166516 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:00.276) 0:01:19.417 ******** 2026-01-05 00:51:34.166530 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166535 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166540 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166546 | orchestrator | 2026-01-05 00:51:34.166551 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-05 00:51:34.166557 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:00.259) 0:01:19.677 ******** 2026-01-05 00:51:34.166560 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166563 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166566 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166569 | orchestrator | 2026-01-05 00:51:34.166572 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-05 00:51:34.166575 | 
orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:00.411) 0:01:20.088 ******** 2026-01-05 00:51:34.166578 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166581 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166585 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166588 | orchestrator | 2026-01-05 00:51:34.166591 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-05 00:51:34.166594 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:00.276) 0:01:20.364 ******** 2026-01-05 00:51:34.166597 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166600 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166608 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166611 | orchestrator | 2026-01-05 00:51:34.166618 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-05 00:51:34.166621 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:00.325) 0:01:20.689 ******** 2026-01-05 00:51:34.166624 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166627 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166630 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166633 | orchestrator | 2026-01-05 00:51:34.166637 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-05 00:51:34.166643 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:00.277) 0:01:20.967 ******** 2026-01-05 00:51:34.166646 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166649 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166652 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166655 | orchestrator | 2026-01-05 00:51:34.166658 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:51:34.166661 | 
orchestrator | Monday 05 January 2026 00:50:23 +0000 (0:00:00.307) 0:01:21.275 ******** 2026-01-05 00:51:34.166664 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:51:34.166667 | orchestrator | 2026-01-05 00:51:34.166670 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-05 00:51:34.166673 | orchestrator | Monday 05 January 2026 00:50:23 +0000 (0:00:00.685) 0:01:21.961 ******** 2026-01-05 00:51:34.166676 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166680 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166683 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166687 | orchestrator | 2026-01-05 00:51:34.166692 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-05 00:51:34.166695 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:00.427) 0:01:22.389 ******** 2026-01-05 00:51:34.166698 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.166702 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.166705 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.166708 | orchestrator | 2026-01-05 00:51:34.166711 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-05 00:51:34.166714 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:00.394) 0:01:22.783 ******** 2026-01-05 00:51:34.166717 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166720 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166723 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166726 | orchestrator | 2026-01-05 00:51:34.166729 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-05 00:51:34.166732 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.479) 0:01:23.262 ******** 
2026-01-05 00:51:34.166735 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166738 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166741 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166744 | orchestrator | 2026-01-05 00:51:34.166747 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-05 00:51:34.166750 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.342) 0:01:23.605 ******** 2026-01-05 00:51:34.166754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166757 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166760 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166763 | orchestrator | 2026-01-05 00:51:34.166766 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-05 00:51:34.166769 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.310) 0:01:23.916 ******** 2026-01-05 00:51:34.166772 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166775 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166778 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166781 | orchestrator | 2026-01-05 00:51:34.166787 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-05 00:51:34.166790 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:00.338) 0:01:24.254 ******** 2026-01-05 00:51:34.166793 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166796 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166799 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166802 | orchestrator | 2026-01-05 00:51:34.166805 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-05 00:51:34.166808 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:00.459) 
0:01:24.713 ******** 2026-01-05 00:51:34.166811 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.166815 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.166818 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.166821 | orchestrator | 2026-01-05 00:51:34.166824 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:51:34.166827 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.382) 0:01:25.095 ******** 2026-01-05 00:51:34.166831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166873 | orchestrator | 2026-01-05 00:51:34.166876 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:51:34.166880 | orchestrator | Monday 05 January 2026 00:50:28 +0000 (0:00:01.471) 0:01:26.567 ******** 2026-01-05 00:51:34.166883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
00:51:34.166912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166918 | orchestrator | 2026-01-05 00:51:34.166921 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 00:51:34.166926 | orchestrator | Monday 05 January 2026 00:50:32 +0000 (0:00:03.768) 0:01:30.335 ******** 2026-01-05 00:51:34.166932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 00:51:34.166938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.166968 | orchestrator | 2026-01-05 00:51:34.166971 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.166974 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:02.169) 0:01:32.505 ******** 2026-01-05 00:51:34.166977 | orchestrator | 2026-01-05 00:51:34.166999 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.167003 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.075) 0:01:32.580 ******** 2026-01-05 00:51:34.167006 | orchestrator | 2026-01-05 00:51:34.167010 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.167013 | orchestrator | Monday 05 January 2026 00:50:34 
+0000 (0:00:00.080) 0:01:32.661 ******** 2026-01-05 00:51:34.167016 | orchestrator | 2026-01-05 00:51:34.167019 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 00:51:34.167022 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.079) 0:01:32.740 ******** 2026-01-05 00:51:34.167025 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167028 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167031 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167036 | orchestrator | 2026-01-05 00:51:34.167041 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-05 00:51:34.167045 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:07.702) 0:01:40.443 ******** 2026-01-05 00:51:34.167054 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167061 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167066 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167070 | orchestrator | 2026-01-05 00:51:34.167074 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 00:51:34.167079 | orchestrator | Monday 05 January 2026 00:50:45 +0000 (0:00:03.000) 0:01:43.444 ******** 2026-01-05 00:51:34.167083 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167088 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167092 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167097 | orchestrator | 2026-01-05 00:51:34.167102 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 00:51:34.167107 | orchestrator | Monday 05 January 2026 00:50:53 +0000 (0:00:07.766) 0:01:51.211 ******** 2026-01-05 00:51:34.167112 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.167117 | orchestrator | 2026-01-05 00:51:34.167122 | orchestrator | TASK [ovn-db : Get 
OVN_Northbound cluster leader] ****************************** 2026-01-05 00:51:34.167128 | orchestrator | Monday 05 January 2026 00:50:53 +0000 (0:00:00.363) 0:01:51.575 ******** 2026-01-05 00:51:34.167131 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167134 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167137 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167141 | orchestrator | 2026-01-05 00:51:34.167147 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 00:51:34.167159 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:00.800) 0:01:52.375 ******** 2026-01-05 00:51:34.167165 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.167171 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.167176 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167181 | orchestrator | 2026-01-05 00:51:34.167186 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 00:51:34.167194 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:00.593) 0:01:52.969 ******** 2026-01-05 00:51:34.167202 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167207 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167212 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167217 | orchestrator | 2026-01-05 00:51:34.167222 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 00:51:34.167227 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:00.791) 0:01:53.760 ******** 2026-01-05 00:51:34.167233 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.167237 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.167241 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167247 | orchestrator | 2026-01-05 00:51:34.167252 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] 
********************************************* 2026-01-05 00:51:34.167257 | orchestrator | Monday 05 January 2026 00:50:56 +0000 (0:00:00.644) 0:01:54.405 ******** 2026-01-05 00:51:34.167263 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167268 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167273 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167278 | orchestrator | 2026-01-05 00:51:34.167283 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 00:51:34.167286 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:01.093) 0:01:55.498 ******** 2026-01-05 00:51:34.167289 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167292 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167295 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167298 | orchestrator | 2026-01-05 00:51:34.167302 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-05 00:51:34.167305 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:00.820) 0:01:56.318 ******** 2026-01-05 00:51:34.167308 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167311 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167314 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167317 | orchestrator | 2026-01-05 00:51:34.167320 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:51:34.167323 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:00.299) 0:01:56.618 ******** 2026-01-05 00:51:34.167327 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 00:51:34.167332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167337 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167342 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167360 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167380 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167391 | orchestrator | 2026-01-05 00:51:34.167397 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:51:34.167402 | orchestrator | Monday 05 January 2026 00:51:00 +0000 (0:00:01.472) 0:01:58.090 ******** 2026-01-05 00:51:34.167408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167413 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167419 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
00:51:34.167433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167439 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167457 | orchestrator | 2026-01-05 00:51:34.167462 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 
00:51:34.167468 | orchestrator | Monday 05 January 2026 00:51:04 +0000 (0:00:04.419) 0:02:02.510 ******** 2026-01-05 00:51:34.167473 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167502 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167527 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:34.167533 | orchestrator | 2026-01-05 00:51:34.167538 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.167544 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:02.926) 0:02:05.436 ******** 2026-01-05 00:51:34.167549 | orchestrator | 2026-01-05 00:51:34.167554 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.167560 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.069) 0:02:05.506 ******** 2026-01-05 00:51:34.167565 | orchestrator | 2026-01-05 00:51:34.167570 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:51:34.167575 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.095) 0:02:05.601 ******** 2026-01-05 00:51:34.167580 | orchestrator | 2026-01-05 00:51:34.167585 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 00:51:34.167590 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.068) 0:02:05.669 ******** 2026-01-05 00:51:34.167595 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167600 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167605 | orchestrator | 2026-01-05 00:51:34.167611 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-05 00:51:34.167616 | orchestrator | Monday 05 January 2026 00:51:13 +0000 (0:00:06.252) 0:02:11.922 ******** 2026-01-05 00:51:34.167625 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167630 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167636 | orchestrator | 2026-01-05 00:51:34.167641 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 00:51:34.167646 
| orchestrator | Monday 05 January 2026 00:51:20 +0000 (0:00:06.220) 0:02:18.143 ******** 2026-01-05 00:51:34.167651 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:34.167657 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:34.167662 | orchestrator | 2026-01-05 00:51:34.167667 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 00:51:34.167670 | orchestrator | Monday 05 January 2026 00:51:26 +0000 (0:00:06.820) 0:02:24.963 ******** 2026-01-05 00:51:34.167674 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:34.167678 | orchestrator | 2026-01-05 00:51:34.167684 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 00:51:34.167688 | orchestrator | Monday 05 January 2026 00:51:27 +0000 (0:00:00.223) 0:02:25.187 ******** 2026-01-05 00:51:34.167693 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167697 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167702 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167706 | orchestrator | 2026-01-05 00:51:34.167710 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 00:51:34.167718 | orchestrator | Monday 05 January 2026 00:51:28 +0000 (0:00:00.833) 0:02:26.020 ******** 2026-01-05 00:51:34.167723 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.167728 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.167733 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167737 | orchestrator | 2026-01-05 00:51:34.167742 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 00:51:34.167747 | orchestrator | Monday 05 January 2026 00:51:28 +0000 (0:00:00.586) 0:02:26.607 ******** 2026-01-05 00:51:34.167752 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167757 | orchestrator | ok: [testbed-node-1] 2026-01-05 
00:51:34.167761 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167766 | orchestrator | 2026-01-05 00:51:34.167772 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 00:51:34.167778 | orchestrator | Monday 05 January 2026 00:51:29 +0000 (0:00:00.812) 0:02:27.419 ******** 2026-01-05 00:51:34.167783 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:34.167788 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:34.167793 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:34.167798 | orchestrator | 2026-01-05 00:51:34.167803 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 00:51:34.167808 | orchestrator | Monday 05 January 2026 00:51:30 +0000 (0:00:00.835) 0:02:28.255 ******** 2026-01-05 00:51:34.167813 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167818 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167824 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167828 | orchestrator | 2026-01-05 00:51:34.167834 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 00:51:34.167839 | orchestrator | Monday 05 January 2026 00:51:31 +0000 (0:00:00.783) 0:02:29.039 ******** 2026-01-05 00:51:34.167844 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:34.167849 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:34.167854 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:34.167859 | orchestrator | 2026-01-05 00:51:34.167865 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:51:34.167870 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-05 00:51:34.167876 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-05 00:51:34.167886 | orchestrator | 
testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-05 00:51:34.167896 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:34.167905 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:34.167910 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:34.167915 | orchestrator | 2026-01-05 00:51:34.167920 | orchestrator | 2026-01-05 00:51:34.167926 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:51:34.167931 | orchestrator | Monday 05 January 2026 00:51:32 +0000 (0:00:00.957) 0:02:29.996 ******** 2026-01-05 00:51:34.167936 | orchestrator | =============================================================================== 2026-01-05 00:51:34.167941 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.23s 2026-01-05 00:51:34.167947 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.12s 2026-01-05 00:51:34.167952 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.59s 2026-01-05 00:51:34.167957 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.95s 2026-01-05 00:51:34.167962 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.22s 2026-01-05 00:51:34.167967 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.42s 2026-01-05 00:51:34.167972 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.77s 2026-01-05 00:51:34.167977 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s 2026-01-05 00:51:34.168010 | orchestrator | ovn-controller : 
Create br-int bridge on OpenvSwitch -------------------- 2.31s 2026-01-05 00:51:34.168016 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.17s 2026-01-05 00:51:34.168021 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.16s 2026-01-05 00:51:34.168026 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.08s 2026-01-05 00:51:34.168032 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.98s 2026-01-05 00:51:34.168037 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.69s 2026-01-05 00:51:34.168042 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-01-05 00:51:34.168047 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-01-05 00:51:34.168053 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.36s 2026-01-05 00:51:34.168058 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.32s 2026-01-05 00:51:34.168063 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.23s 2026-01-05 00:51:34.168069 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.09s 2026-01-05 00:51:34.168074 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:51:34.168080 | orchestrator | 2026-01-05 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:51:37.201009 | orchestrator | 2026-01-05 00:51:37 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:51:37.202131 | orchestrator | 2026-01-05 00:51:37 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state STARTED 2026-01-05 00:51:37.202171 | orchestrator | 
2026-01-05 00:51:37 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 00:54:33.923938 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:54:33.925400 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:54:33.928864 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:54:33.936167 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 16df5c74-74d9-46b5-8b4e-92ee50eafd4f is in state SUCCESS 2026-01-05 00:54:33.938081 | orchestrator | 2026-01-05 00:54:33.938235 | orchestrator | 2026-01-05 00:54:33.938248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:54:33.938258 | orchestrator | 2026-01-05 00:54:33.938268 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:54:33.938278 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.321) 0:00:00.321 ******** 2026-01-05 00:54:33.938287 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.938298 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.938343 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.938353 | orchestrator | 2026-01-05 00:54:33.938362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:54:33.938371 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.424) 0:00:00.745 ******** 2026-01-05 00:54:33.938381 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-05 00:54:33.938391 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-05 00:54:33.938400 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-05 00:54:33.938455 | orchestrator | 2026-01-05 00:54:33.938465 | orchestrator | PLAY [Apply role loadbalancer] 
************************************************* 2026-01-05 00:54:33.938474 | orchestrator | 2026-01-05 00:54:33.938483 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:54:33.938491 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.971) 0:00:01.717 ******** 2026-01-05 00:54:33.938515 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.938524 | orchestrator | 2026-01-05 00:54:33.938532 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-05 00:54:33.938540 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:01.181) 0:00:02.898 ******** 2026-01-05 00:54:33.938549 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.938557 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.938565 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.938573 | orchestrator | 2026-01-05 00:54:33.938582 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-05 00:54:33.938590 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:01.069) 0:00:03.968 ******** 2026-01-05 00:54:33.938623 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.938631 | orchestrator | 2026-01-05 00:54:33.938639 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-05 00:54:33.938646 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:01.409) 0:00:05.377 ******** 2026-01-05 00:54:33.938654 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.938662 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.938669 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.938697 | orchestrator | 2026-01-05 00:54:33.938709 | orchestrator | TASK [sysctl : Setting sysctl values] 
****************************************** 2026-01-05 00:54:33.938729 | orchestrator | Monday 05 January 2026 00:47:46 +0000 (0:00:01.020) 0:00:06.397 ******** 2026-01-05 00:54:33.938786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938886 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938918 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:54:33.938928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 00:54:33.938956 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:54:33.938973 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 00:54:33.938980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:54:33.938987 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:54:33.938994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 00:54:33.939001 | orchestrator | 2026-01-05 00:54:33.939008 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 00:54:33.939015 | orchestrator | Monday 05 January 2026 00:47:51 +0000 (0:00:04.338) 
0:00:10.736 ******** 2026-01-05 00:54:33.939023 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-05 00:54:33.939030 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-05 00:54:33.939101 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-05 00:54:33.939111 | orchestrator | 2026-01-05 00:54:33.939119 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 00:54:33.939128 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:01.010) 0:00:11.747 ******** 2026-01-05 00:54:33.939136 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-05 00:54:33.939143 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-05 00:54:33.939151 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-05 00:54:33.939160 | orchestrator | 2026-01-05 00:54:33.939168 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 00:54:33.939202 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:01.767) 0:00:13.514 ******** 2026-01-05 00:54:33.939213 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-05 00:54:33.939222 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.939284 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-05 00:54:33.939296 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.939306 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-05 00:54:33.939315 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.939360 | orchestrator | 2026-01-05 00:54:33.939370 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-05 00:54:33.939377 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:01.136) 0:00:14.651 ******** 2026-01-05 00:54:33.939389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939664 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.939708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.939717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.939725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.939742 | orchestrator | 2026-01-05 00:54:33.939750 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-05 00:54:33.939759 | orchestrator | Monday 05 January 2026 00:47:58 +0000 (0:00:03.123) 0:00:17.774 ******** 2026-01-05 00:54:33.939767 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:54:33.939777 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.939786 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.939793 | orchestrator | 2026-01-05 00:54:33.939802 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-05 00:54:33.939811 | orchestrator | Monday 05 January 2026 00:48:00 +0000 (0:00:01.668) 0:00:19.443 ******** 2026-01-05 00:54:33.939819 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-05 00:54:33.939828 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-05 00:54:33.939840 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-05 00:54:33.939848 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-05 00:54:33.939857 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-05 00:54:33.939865 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-05 00:54:33.939873 | orchestrator | 2026-01-05 00:54:33.939881 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-05 00:54:33.939890 | orchestrator | Monday 05 January 2026 00:48:03 +0000 (0:00:03.284) 0:00:22.727 ******** 2026-01-05 00:54:33.939898 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.939907 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.939913 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.939920 | orchestrator | 2026-01-05 00:54:33.939928 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-05 00:54:33.939935 | orchestrator | Monday 05 January 2026 00:48:05 +0000 (0:00:02.284) 0:00:25.012 ******** 2026-01-05 00:54:33.939944 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.939952 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.939960 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.939967 | orchestrator | 2026-01-05 
00:54:33.940087 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-05 00:54:33.940096 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:03.678) 0:00:28.691 ******** 2026-01-05 00:54:33.940106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.940126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.940136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940228 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.940237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.940251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.940259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940281 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.940288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.940302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.940310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940329 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.940337 | orchestrator | 2026-01-05 00:54:33.940344 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-05 00:54:33.940353 | orchestrator | Monday 05 January 2026 00:48:11 +0000 (0:00:02.071) 0:00:30.762 ******** 2026-01-05 00:54:33.940360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.940460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508', '__omit_place_holder__e876bc065cf6d57b9964b155af783149c3f90508'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:54:33.940465 | orchestrator | 2026-01-05 00:54:33.940473 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-05 00:54:33.940478 | orchestrator | Monday 05 January 2026 00:48:15 +0000 (0:00:04.172) 0:00:34.934 ******** 2026-01-05 00:54:33.940483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940513 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.940528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.940534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.940538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.940547 | orchestrator | 2026-01-05 00:54:33.940551 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-05 00:54:33.940556 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:03.701) 0:00:38.636 ******** 2026-01-05 00:54:33.940561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:54:33.941482 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:54:33.941509 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:54:33.941514 | orchestrator | 2026-01-05 00:54:33.941518 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-05 00:54:33.941523 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:03.695) 0:00:42.331 ******** 
2026-01-05 00:54:33.941527 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:54:33.941532 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:54:33.941536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:54:33.941540 | orchestrator | 2026-01-05 00:54:33.941544 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-05 00:54:33.941548 | orchestrator | Monday 05 January 2026 00:48:30 +0000 (0:00:07.402) 0:00:49.734 ******** 2026-01-05 00:54:33.941552 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.941557 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.941561 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.941565 | orchestrator | 2026-01-05 00:54:33.941569 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-05 00:54:33.941573 | orchestrator | Monday 05 January 2026 00:48:31 +0000 (0:00:00.997) 0:00:50.731 ******** 2026-01-05 00:54:33.941578 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:54:33.941584 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:54:33.941588 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:54:33.941592 | orchestrator | 2026-01-05 00:54:33.941616 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-05 00:54:33.941621 | orchestrator | Monday 05 January 2026 00:48:35 +0000 (0:00:04.099) 0:00:54.831 ******** 
2026-01-05 00:54:33.941625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:54:33.941630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:54:33.941634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:54:33.941638 | orchestrator | 2026-01-05 00:54:33.941642 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-05 00:54:33.941646 | orchestrator | Monday 05 January 2026 00:48:39 +0000 (0:00:04.501) 0:00:59.332 ******** 2026-01-05 00:54:33.941651 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-05 00:54:33.941659 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-05 00:54:33.941663 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-05 00:54:33.941668 | orchestrator | 2026-01-05 00:54:33.941672 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-05 00:54:33.941676 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:02.099) 0:01:01.432 ******** 2026-01-05 00:54:33.941680 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-05 00:54:33.941692 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-05 00:54:33.941696 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-05 00:54:33.941700 | orchestrator | 2026-01-05 00:54:33.941704 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:54:33.941709 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:02.375) 0:01:03.807 ******** 2026-01-05 00:54:33.941713 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.941717 | orchestrator | 2026-01-05 00:54:33.941721 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-05 00:54:33.941725 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:01.179) 0:01:04.987 ******** 2026-01-05 00:54:33.941745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.941786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.941791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.941799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.941803 | orchestrator | 2026-01-05 00:54:33.941808 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-05 00:54:33.941812 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:03.604) 0:01:08.591 ******** 2026-01-05 00:54:33.941816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.941821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.941853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.941858 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.941862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.941867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.941876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.941880 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.941885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.941889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.941893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.941920 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.941928 | orchestrator | 2026-01-05 00:54:33.941942 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-05 00:54:33.941953 | orchestrator | Monday 05 January 2026 00:48:51 +0000 (0:00:01.953) 0:01:10.545 ******** 2026-01-05 00:54:33.941960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.941968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.941978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.941985 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.941992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942106 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942131 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942136 | orchestrator | 2026-01-05 00:54:33.942141 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:54:33.942146 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:01.122) 0:01:11.668 ******** 2026-01-05 00:54:33.942155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942196 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942219 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942248 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942253 | orchestrator | 2026-01-05 00:54:33.942258 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:54:33.942262 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.783) 0:01:12.451 ******** 2026-01-05 00:54:33.942267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942287 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942313 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942334 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942338 | orchestrator | 2026-01-05 00:54:33.942342 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 00:54:33.942346 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.556) 0:01:13.007 ******** 2026-01-05 00:54:33.942351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942371 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942391 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942415 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942419 | orchestrator | 2026-01-05 00:54:33.942423 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-05 00:54:33.942427 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:00.754) 0:01:13.762 ******** 2026-01-05 00:54:33.942432 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942447 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942452 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942472 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942477 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942489 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942493 | orchestrator | 
2026-01-05 00:54:33.942501 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-05 00:54:33.942505 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.856) 0:01:14.619 ******** 2026-01-05 00:54:33.942509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942530 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942548 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942647 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942651 | orchestrator | 2026-01-05 00:54:33.942656 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-05 00:54:33.942664 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.594) 0:01:15.214 ******** 2026-01-05 00:54:33.942669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942701 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942707 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942744 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942748 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:54:33.942761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:54:33.942765 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:54:33.942770 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942774 | orchestrator | 2026-01-05 00:54:33.942778 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-05 00:54:33.942782 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:00.836) 0:01:16.050 ******** 2026-01-05 00:54:33.942787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:54:33.942792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:54:33.942796 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:54:33.942800 | orchestrator | 2026-01-05 00:54:33.942804 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-05 00:54:33.942808 | orchestrator | Monday 05 January 2026 00:48:58 +0000 (0:00:01.933) 0:01:17.983 ******** 2026-01-05 00:54:33.942817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:54:33.942821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:54:33.942828 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:54:33.942833 | orchestrator | 2026-01-05 00:54:33.942837 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-05 00:54:33.942841 | orchestrator | Monday 05 January 2026 00:49:00 +0000 (0:00:01.461) 0:01:19.444 ******** 2026-01-05 00:54:33.942845 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:54:33.942849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:54:33.942854 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:54:33.942858 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.942862 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:54:33.942866 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.942870 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:54:33.942874 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:54:33.942878 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.942882 | orchestrator | 2026-01-05 00:54:33.942887 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-05 00:54:33.942891 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:01.031) 0:01:20.476 ******** 2026-01-05 00:54:33.942898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:54:33.942949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.942960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.942966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:54:33.942973 | orchestrator | 2026-01-05 00:54:33.942980 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-05 00:54:33.942987 | orchestrator | Monday 05 January 2026 00:49:03 +0000 (0:00:02.752) 0:01:23.229 ******** 2026-01-05 00:54:33.942994 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.943000 | orchestrator | 2026-01-05 
00:54:33.943004 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-05 00:54:33.943008 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:00.641) 0:01:23.871 ******** 2026-01-05 00:54:33.943014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:54:33.943027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:54:33.943048 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:54:33.943072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 
00:54:33.943084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943092 | orchestrator | 2026-01-05 00:54:33.943096 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-05 00:54:33.943100 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:04.890) 0:01:28.761 ******** 2026-01-05 00:54:33.943105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:54:33.943111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943124 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.943140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:54:33.943145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943216 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943221 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.943226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:54:33.943255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.943263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943294 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.943299 | orchestrator | 2026-01-05 00:54:33.943304 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-05 00:54:33.943308 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:01.344) 0:01:30.106 ******** 2026-01-05 00:54:33.943313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}})  2026-01-05 00:54:33.943319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:54:33.943324 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.943329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:54:33.943333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:54:33.943338 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.943342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:54:33.943349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:54:33.943354 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.943358 | orchestrator | 2026-01-05 00:54:33.943363 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-05 00:54:33.943367 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:01.115) 0:01:31.221 ******** 2026-01-05 00:54:33.943371 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.943376 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.943380 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.943384 | orchestrator | 2026-01-05 00:54:33.943389 | orchestrator | 
TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-05 00:54:33.943393 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:01.337) 0:01:32.559 ******** 2026-01-05 00:54:33.943397 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.943401 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.943406 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.943410 | orchestrator | 2026-01-05 00:54:33.943414 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-05 00:54:33.943419 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:01.908) 0:01:34.468 ******** 2026-01-05 00:54:33.943423 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.943427 | orchestrator | 2026-01-05 00:54:33.943431 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-05 00:54:33.943436 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:00.822) 0:01:35.290 ******** 2026-01-05 00:54:33.943445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.943454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.943471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.943487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.943497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943501 | orchestrator |
2026-01-05 00:54:33.943505 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-05 00:54:33.943510 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:04.452) 0:01:39.743 ********
2026-01-05 00:54:33.943517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.943522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943539 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.943543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.943548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943560 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.943564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.943575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.943585 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.943589 | orchestrator |
2026-01-05 00:54:33.943594 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-05 00:54:33.943620 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:00.571) 0:01:40.314 ********
2026-01-05 00:54:33.943625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943635 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.943640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943648 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.943653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-05 00:54:33.943665 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.943670 | orchestrator |
2026-01-05 00:54:33.943674 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-05 00:54:33.943678 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:01.036) 0:01:41.351 ********
2026-01-05 00:54:33.943683 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.943687 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.943696 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.943700 | orchestrator |
2026-01-05 00:54:33.943704 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-05 00:54:33.943709 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:01.312) 0:01:42.663 ********
2026-01-05 00:54:33.943713 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.943717 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.943722 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.943726 | orchestrator |
2026-01-05 00:54:33.943731 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-05 00:54:33.943735 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:01.897) 0:01:44.561 ********
2026-01-05 00:54:33.943739 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.943744 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.943748 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.943752 | orchestrator |
2026-01-05 00:54:33.943757 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-05 00:54:33.943761 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:00.292) 0:01:44.853 ********
2026-01-05 00:54:33.943765 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.943770 | orchestrator |
2026-01-05 00:54:33.943774 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-05 00:54:33.943778 | orchestrator | Monday 05 January 2026 00:49:26 +0000 (0:00:00.780) 0:01:45.634 ********
2026-01-05 00:54:33.943795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943814 | orchestrator |
2026-01-05 00:54:33.943819 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-05 00:54:33.943823 | orchestrator | Monday 05 January 2026 00:49:28 +0000 (0:00:02.729) 0:01:48.363 ********
2026-01-05 00:54:33.943831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943835 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.943840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943844 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.943883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-05 00:54:33.943921 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.943926 | orchestrator |
2026-01-05 00:54:33.943931 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-01-05 00:54:33.943935 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:01.439) 0:01:49.803 ********
2026-01-05 00:54:33.943942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.943948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.943959 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.943963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.944001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.944007 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.944012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.944016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-05 00:54:33.944020 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.944025 | orchestrator |
2026-01-05 00:54:33.944029 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-01-05 00:54:33.944033 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:01.633) 0:01:51.437 ********
2026-01-05 00:54:33.944038 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.944042 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.944046 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.944051 | orchestrator |
2026-01-05 00:54:33.944055 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-01-05 00:54:33.944060 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.660) 0:01:52.098 ********
2026-01-05 00:54:33.944064 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.944068 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.944073 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.944077 | orchestrator |
2026-01-05 00:54:33.944082 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-01-05 00:54:33.944090 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:01.171) 0:01:53.269 ********
2026-01-05 00:54:33.944095 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.944100 | orchestrator |
2026-01-05 00:54:33.944104 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-01-05 00:54:33.944109 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:00.711) 0:01:53.981 ********
2026-01-05 00:54:33.944113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944215 | orchestrator |
2026-01-05 00:54:33.944220 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-01-05 00:54:33.944224 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:04.816) 0:01:58.797 ********
2026-01-05 00:54:33.944229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944253 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.944258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944283 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.944288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.944297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.944315 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.944319 | orchestrator |
2026-01-05 00:54:33.944324 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-01-05 00:54:33.944328 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:01.200) 0:01:59.998 ********
2026-01-05 00:54:33.944335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend':
'no'}})  2026-01-05 00:54:33.944340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:33.944345 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.944349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:33.944353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:33.944358 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.944362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:33.944366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:33.944371 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.944378 | orchestrator | 2026-01-05 00:54:33.944383 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-05 00:54:33.944387 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:01.213) 0:02:01.212 ******** 2026-01-05 00:54:33.944392 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.944396 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.944400 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 00:54:33.944405 | orchestrator | 2026-01-05 00:54:33.944409 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-05 00:54:33.944413 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:01.468) 0:02:02.680 ******** 2026-01-05 00:54:33.944418 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.944429 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.944433 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.944438 | orchestrator | 2026-01-05 00:54:33.944445 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-05 00:54:33.944450 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:02.125) 0:02:04.806 ******** 2026-01-05 00:54:33.944454 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.944459 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.944463 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.944467 | orchestrator | 2026-01-05 00:54:33.944472 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-05 00:54:33.944477 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:00.686) 0:02:05.492 ******** 2026-01-05 00:54:33.944481 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.944486 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.944490 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.944503 | orchestrator | 2026-01-05 00:54:33.944508 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-05 00:54:33.944512 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:00.413) 0:02:05.906 ******** 2026-01-05 00:54:33.944517 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.944521 | orchestrator | 2026-01-05 00:54:33.944526 | orchestrator | 
TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-05 00:54:33.944530 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:00.807) 0:02:06.714 ******** 2026-01-05 00:54:33.944544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:33.944550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:33.944627 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944649 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:33.944675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944712 | orchestrator | 2026-01-05 00:54:33.944757 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-05 00:54:33.944762 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:04.520) 0:02:11.235 ******** 2026-01-05 00:54:33.944767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:33.944788 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944802 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:33.944825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944837 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.944842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944870 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.944878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:33.944883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:33.944888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.944920 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.944924 | orchestrator | 2026-01-05 00:54:33.944929 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-05 00:54:33.944933 | orchestrator | Monday 
05 January 2026 00:49:52 +0000 (0:00:01.117) 0:02:12.352 ******** 2026-01-05 00:54:33.944938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944947 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.944952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944965 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.944969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:33.944978 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.944982 | orchestrator | 2026-01-05 00:54:33.944987 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-05 00:54:33.944991 | orchestrator | Monday 05 January 2026 00:49:54 +0000 (0:00:01.277) 0:02:13.630 ******** 2026-01-05 
00:54:33.944996 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.945000 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.945048 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.945053 | orchestrator | 2026-01-05 00:54:33.945058 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-05 00:54:33.945062 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:01.655) 0:02:15.286 ******** 2026-01-05 00:54:33.945066 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.945071 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.945075 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.945079 | orchestrator | 2026-01-05 00:54:33.945084 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-05 00:54:33.945092 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:01.826) 0:02:17.113 ******** 2026-01-05 00:54:33.945097 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.945101 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.945105 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.945110 | orchestrator | 2026-01-05 00:54:33.945114 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-05 00:54:33.945118 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:00.540) 0:02:17.654 ******** 2026-01-05 00:54:33.945122 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.945127 | orchestrator | 2026-01-05 00:54:33.945131 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-05 00:54:33.945135 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:00.824) 0:02:18.478 ******** 2026-01-05 00:54:33.945148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:33.945163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:33.945522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:33.945559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945568 | orchestrator | 2026-01-05 00:54:33.945573 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-05 00:54:33.945577 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:05.775) 0:02:24.254 ******** 2026-01-05 00:54:33.945585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:33.945593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945652 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.945660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:33.945669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945677 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.945714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:33.945724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.945732 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.945737 | orchestrator | 2026-01-05 00:54:33.945741 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-05 00:54:33.945746 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:04.355) 0:02:28.609 ******** 2026-01-05 00:54:33.945750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945761 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.945765 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945778 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.945782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.945796 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.945800 | orchestrator | 2026-01-05 00:54:33.945804 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-05 00:54:33.945809 | orchestrator | Monday 05 January 2026 00:50:12 +0000 (0:00:03.060) 0:02:31.670 ******** 2026-01-05 00:54:33.945813 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.945818 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.945822 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.945826 | orchestrator | 2026-01-05 00:54:33.945831 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-05 00:54:33.945835 | orchestrator | Monday 05 January 2026 00:50:13 +0000 (0:00:01.327) 0:02:32.998 ******** 2026-01-05 00:54:33.945840 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.945844 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.945848 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.945853 | orchestrator | 2026-01-05 00:54:33.945860 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-05 00:54:33.945865 | orchestrator | Monday 05 January 2026 00:50:15 +0000 (0:00:02.210) 0:02:35.209 ******** 2026-01-05 00:54:33.945869 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.945873 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.945878 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.945882 | orchestrator | 2026-01-05 00:54:33.945886 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-05 
00:54:33.945891 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:00.643) 0:02:35.852 ******** 2026-01-05 00:54:33.945895 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.945899 | orchestrator | 2026-01-05 00:54:33.945904 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-05 00:54:33.945908 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:00.841) 0:02:36.693 ******** 2026-01-05 00:54:33.945913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:33.945918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:33.945927 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:33.945935 | orchestrator | 2026-01-05 00:54:33.945939 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-05 00:54:33.945944 | orchestrator | Monday 05 January 2026 00:50:20 +0000 (0:00:03.370) 0:02:40.064 ******** 2026-01-05 00:54:33.945949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:33.945957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:33.945965 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.945973 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.945980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:33.945989 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.945996 | orchestrator | 2026-01-05 00:54:33.946005 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-05 00:54:33.946048 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:00.607) 0:02:40.671 ******** 2026-01-05 00:54:33.946056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946073 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946094 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:33.946168 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946176 | orchestrator | 2026-01-05 00:54:33.946184 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-05 00:54:33.946191 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:00.622) 0:02:41.293 ******** 2026-01-05 00:54:33.946246 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.946296 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.946302 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.946307 | orchestrator | 2026-01-05 00:54:33.946312 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-05 00:54:33.946318 | orchestrator | Monday 05 January 2026 00:50:23 
+0000 (0:00:01.260) 0:02:42.553 ******** 2026-01-05 00:54:33.946323 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.946329 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.946334 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.946339 | orchestrator | 2026-01-05 00:54:33.946345 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-05 00:54:33.946350 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:01.952) 0:02:44.506 ******** 2026-01-05 00:54:33.946356 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946361 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946366 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946371 | orchestrator | 2026-01-05 00:54:33.946376 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-05 00:54:33.946381 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.474) 0:02:44.981 ******** 2026-01-05 00:54:33.946387 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.946392 | orchestrator | 2026-01-05 00:54:33.946397 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-05 00:54:33.946401 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:00.912) 0:02:45.893 ******** 2026-01-05 00:54:33.946417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:33.946434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:33.946446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:33.946472 | orchestrator | 2026-01-05 00:54:33.946478 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-05 00:54:33.946484 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:03.506) 0:02:49.399 ******** 2026-01-05 00:54:33.946496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:33.946502 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:33.946520 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:33.946534 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946538 | orchestrator | 2026-01-05 00:54:33.946543 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-05 00:54:33.946548 | orchestrator | Monday 05 January 2026 00:50:31 +0000 (0:00:01.022) 0:02:50.421 ******** 2026-01-05 00:54:33.946557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:54:33.946591 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946634 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:33.946666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:54:33.946671 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:33.946681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:54:33.946685 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946690 | orchestrator | 2026-01-05 00:54:33.946694 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-05 00:54:33.946699 | orchestrator | Monday 05 January 2026 00:50:31 +0000 (0:00:00.870) 0:02:51.292 ******** 2026-01-05 00:54:33.946703 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.946708 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.946713 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.946717 | orchestrator | 2026-01-05 00:54:33.946722 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-05 00:54:33.946726 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:01.253) 0:02:52.546 ******** 2026-01-05 00:54:33.946731 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.946736 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.946740 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.946745 | orchestrator | 2026-01-05 00:54:33.946749 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-05 00:54:33.946754 | orchestrator | Monday 05 January 2026 00:50:35 +0000 
(0:00:02.609) 0:02:55.155 ******** 2026-01-05 00:54:33.946759 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946763 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946768 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946772 | orchestrator | 2026-01-05 00:54:33.946777 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-05 00:54:33.946784 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.341) 0:02:55.496 ******** 2026-01-05 00:54:33.946789 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946793 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946798 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.946802 | orchestrator | 2026-01-05 00:54:33.946807 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-05 00:54:33.946812 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.576) 0:02:56.072 ******** 2026-01-05 00:54:33.946816 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.946821 | orchestrator | 2026-01-05 00:54:33.946826 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-05 00:54:33.946830 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:01.213) 0:02:57.286 ******** 2026-01-05 00:54:33.946835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:54:33.946850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.946856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.946862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:54:33.946870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.946875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:54:33.946886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.946916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.946921 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.946926 | orchestrator | 2026-01-05 00:54:33.946930 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-05 00:54:33.946935 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:04.042) 0:03:01.329 ******** 2026-01-05 00:54:33.946944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})  2026-01-05 00:54:33.946949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.946959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.946964 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.946972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:54:33.946978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.946983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.946987 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.946993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:54:33.947002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:54:33.947013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:54:33.947018 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.947023 | orchestrator | 2026-01-05 00:54:33.947044 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-05 00:54:33.947049 | orchestrator | Monday 05 January 2026 00:50:43 +0000 (0:00:01.467) 0:03:02.797 ******** 2026-01-05 00:54:33.947054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947064 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.947084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947095 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:54:33.947099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:54:33.947112 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.947116 | orchestrator | 2026-01-05 00:54:33.947121 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-05 00:54:33.947178 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:01.054) 0:03:03.852 ******** 2026-01-05 00:54:33.947220 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.947230 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.947259 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.947268 | orchestrator | 2026-01-05 00:54:33.947276 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-05 00:54:33.947284 | orchestrator | Monday 05 January 2026 00:50:45 +0000 (0:00:01.384) 0:03:05.236 ******** 2026-01-05 00:54:33.947292 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.947301 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.947306 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.947310 | orchestrator | 2026-01-05 00:54:33.947315 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-05 00:54:33.947319 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:02.230) 0:03:07.467 ******** 2026-01-05 00:54:33.947324 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 00:54:33.947328 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.947333 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.947338 | orchestrator | 2026-01-05 00:54:33.947342 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-05 00:54:33.947347 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:00.576) 0:03:08.044 ******** 2026-01-05 00:54:33.947351 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.947356 | orchestrator | 2026-01-05 00:54:33.947360 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-05 00:54:33.947365 | orchestrator | Monday 05 January 2026 00:50:49 +0000 (0:00:00.995) 0:03:09.040 ******** 2026-01-05 00:54:33.948149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:54:33.948176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:54:33.948200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:54:33.948218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948223 | orchestrator | 2026-01-05 00:54:33.948228 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-05 00:54:33.948232 | orchestrator | Monday 05 January 2026 00:50:53 +0000 (0:00:03.661) 0:03:12.701 ******** 2026-01-05 00:54:33.948238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:54:33.948247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948254 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.948259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:54:33.948264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948269 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.948279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:54:33.948287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.948299 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.948306 | orchestrator | 2026-01-05 00:54:33.948314 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************
2026-01-05 00:54:33.948321 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:01.027) 0:03:13.728 ********
2026-01-05 00:54:33.948329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948347 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.948358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948374 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.948381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-05 00:54:33.948397 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.948405 | orchestrator |
2026-01-05 00:54:33.948412 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-01-05 00:54:33.948420 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:00.955) 0:03:14.683 ********
2026-01-05 00:54:33.948429 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.948437 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.948445 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.948452 | orchestrator |
2026-01-05 00:54:33.948460 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-01-05 00:54:33.948467 | orchestrator | Monday 05 January 2026 00:50:56 +0000 (0:00:01.333) 0:03:16.017 ********
2026-01-05 00:54:33.948475 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.948482 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.948489 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.948497 | orchestrator |
2026-01-05 00:54:33.948504 | orchestrator | TASK [include_role : manila] ***************************************************
2026-01-05 00:54:33.948512 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:02.286) 0:03:18.304 ********
2026-01-05 00:54:33.948521 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.948528 | orchestrator |
2026-01-05 00:54:33.948535 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-01-05 00:54:33.948540 | orchestrator | Monday 05 January 2026 00:51:00 +0000 (0:00:01.316) 0:03:19.620 ********
2026-01-05 00:54:33.948550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948659 | orchestrator |
2026-01-05 00:54:33.948664 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-05 00:54:33.948668 | orchestrator | Monday 05 January 2026 00:51:04 +0000 (0:00:04.171) 0:03:23.792 ********
2026-01-05 00:54:33.948673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948698 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.948703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 00:54:33.948737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948742 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.948748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 00:54:33.948773 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.948779 | orchestrator |
2026-01-05 00:54:33.948784 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-05 00:54:33.948790 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:00.776) 0:03:24.569 ********
2026-01-05 00:54:33.948795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948806 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.948812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948823 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.948828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-05 00:54:33.948838 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.948844 | orchestrator |
2026-01-05 00:54:33.948849 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-05 00:54:33.948854 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:01.411) 0:03:25.980 ********
2026-01-05 00:54:33.948860 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.948865 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.948870 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.948875 | orchestrator |
2026-01-05 00:54:33.948881 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-05 00:54:33.948890 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:01.399) 0:03:27.380 ********
2026-01-05 00:54:33.948895 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.948901 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.948906 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.948911 | orchestrator |
2026-01-05 00:54:33.948916 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-05 00:54:33.948922 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:02.206) 0:03:29.586 ********
2026-01-05 00:54:33.948927 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.948933 | orchestrator |
2026-01-05 00:54:33.948938 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-05 00:54:33.948947 | orchestrator | Monday 05 January 2026 00:51:11 +0000 (0:00:01.410) 0:03:30.996 ********
2026-01-05 00:54:33.948953 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:54:33.948959 | orchestrator |
2026-01-05 00:54:33.948965 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-05 00:54:33.948970 | orchestrator | Monday 05 January 2026 00:51:14 +0000 (0:00:02.882) 0:03:33.879 ********
2026-01-05 00:54:33.948979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.948985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.948990 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.948999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.949012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.949018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.949023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.949030 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949038 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949043 | orchestrator |
2026-01-05 00:54:33.949047 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-05 00:54:33.949052 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:02.454) 0:03:36.333 ********
2026-01-05 00:54:33.949059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.949065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.949070 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.949087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.949092 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 00:54:33.949105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-05 00:54:33.949109 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949114 | orchestrator |
2026-01-05 00:54:33.949118 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-05 00:54:33.949127 | orchestrator | Monday 05 January 2026 00:51:20 +0000 (0:00:03.110) 0:03:39.444 ********
2026-01-05 00:54:33.949134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949145 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.949149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949163 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.949167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:54:33.949177 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.949182 | orchestrator | 2026-01-05 00:54:33.949190 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-05 00:54:33.949194 | orchestrator | Monday 05 January 2026 00:51:23 +0000 (0:00:03.510) 0:03:42.955 ******** 2026-01-05 00:54:33.949199 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.949203 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.949208 | 
orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.949212 | orchestrator |
2026-01-05 00:54:33.949216 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-05 00:54:33.949221 | orchestrator | Monday 05 January 2026 00:51:25 +0000 (0:00:01.894) 0:03:44.850 ********
2026-01-05 00:54:33.949226 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949230 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949235 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949239 | orchestrator |
2026-01-05 00:54:33.949244 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-05 00:54:33.949248 | orchestrator | Monday 05 January 2026 00:51:27 +0000 (0:00:01.732) 0:03:46.582 ********
2026-01-05 00:54:33.949255 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949260 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949264 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949269 | orchestrator |
2026-01-05 00:54:33.949273 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-05 00:54:33.949278 | orchestrator | Monday 05 January 2026 00:51:27 +0000 (0:00:00.365) 0:03:46.948 ********
2026-01-05 00:54:33.949282 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.949287 | orchestrator |
2026-01-05 00:54:33.949291 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-05 00:54:33.949296 | orchestrator | Monday 05 January 2026 00:51:29 +0000 (0:00:01.615) 0:03:48.563 ********
2026-01-05 00:54:33.949301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes':
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:54:33.949310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:54:33.949315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:54:33.949323 | orchestrator | 2026-01-05 00:54:33.949328 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-05 00:54:33.949332 | orchestrator | Monday 05 January 2026 00:51:30 +0000 (0:00:01.534) 0:03:50.098 ******** 2026-01-05 00:54:33.949337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:54:33.949342 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.949349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:54:33.949354 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.949358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:54:33.949363 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.949368 | orchestrator | 2026-01-05 00:54:33.949372 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-05 00:54:33.949377 | orchestrator | Monday 05 January 2026 00:51:31 +0000 (0:00:00.475) 0:03:50.574 ******** 2026-01-05 00:54:33.949382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:54:33.949390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:54:33.949394 
| orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949399 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-05 00:54:33.949412 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949417 | orchestrator |
2026-01-05 00:54:33.949421 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-01-05 00:54:33.949426 | orchestrator | Monday 05 January 2026 00:51:32 +0000 (0:00:01.010) 0:03:51.585 ********
2026-01-05 00:54:33.949430 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949435 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949439 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949444 | orchestrator |
2026-01-05 00:54:33.949449 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-01-05 00:54:33.949453 | orchestrator | Monday 05 January 2026 00:51:32 +0000 (0:00:00.493) 0:03:52.078 ********
2026-01-05 00:54:33.949458 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949462 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949467 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.949472 | orchestrator |
2026-01-05 00:54:33.949480 | orchestrator | TASK [include_role : mistral] **************************************************
2026-01-05 00:54:33.949487 | orchestrator | Monday 05 January 2026 00:51:34 +0000 (0:00:01.524) 0:03:53.603 ********
2026-01-05 00:54:33.949494 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.949501 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.949509 | orchestrator | skipping:
[testbed-node-2] 2026-01-05 00:54:33.949516 | orchestrator | 2026-01-05 00:54:33.949523 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-05 00:54:33.949531 | orchestrator | Monday 05 January 2026 00:51:34 +0000 (0:00:00.327) 0:03:53.930 ******** 2026-01-05 00:54:33.949539 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.949546 | orchestrator | 2026-01-05 00:54:33.949553 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-05 00:54:33.949559 | orchestrator | Monday 05 January 2026 00:51:36 +0000 (0:00:01.574) 0:03:55.505 ******** 2026-01-05 00:54:33.949570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:54:33.949577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:54:33.949632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.949775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:54:33.949789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:54:33.949808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.949850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.949859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 
00:54:33.949883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:54:33.949888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.949944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.949949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2026-01-05 00:54:33.949957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.949964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.949986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.950063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.950069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950086 | orchestrator | 2026-01-05 00:54:33.950090 | orchestrator | TASK [haproxy-config : Add configuration for neutron 
when using single external frontend] *** 2026-01-05 00:54:33.950095 | orchestrator | Monday 05 January 2026 00:51:40 +0000 (0:00:04.352) 0:03:59.857 ******** 2026-01-05 00:54:33.950100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 00:54:33.950107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950112 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:54:33.950133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 00:54:33.950138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 
00:54:33.950146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:54:33.950204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.950276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950286 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.950291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-01-05 00:54:33.950334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.950347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950352 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.950357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 00:54:33.950365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-01-05 00:54:33.950393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:54:33.950427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950437 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:54:33.950455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.950463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:54:33.950470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:54:33.950476 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.950481 | orchestrator |
2026-01-05 00:54:33.950486 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-01-05 00:54:33.950492 | orchestrator | Monday 05 January 2026 00:51:41 +0000 (0:00:01.406) 0:04:01.264 ********
2026-01-05 00:54:33.950498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950510 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.950518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950529 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.950534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-05 00:54:33.950548 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.950554 | orchestrator |
2026-01-05 00:54:33.950559 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-01-05 00:54:33.950564 | orchestrator | Monday 05 January 2026 00:51:44 +0000 (0:00:02.381) 0:04:03.645 ********
2026-01-05 00:54:33.950569 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.950575 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.950583 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.950591 | orchestrator |
2026-01-05 00:54:33.950651 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-01-05 00:54:33.950659 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:01.328) 0:04:04.974 ********
2026-01-05 00:54:33.950666 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.950674 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.950681 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.950689 | orchestrator |
2026-01-05 00:54:33.950696 | orchestrator | TASK [include_role : placement] ************************************************
2026-01-05 00:54:33.950703 | orchestrator | Monday 05 January 2026 00:51:47 +0000 (0:00:02.203) 0:04:07.177 ********
2026-01-05 00:54:33.950725 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.950732 | orchestrator |
2026-01-05 00:54:33.950740 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-01-05 00:54:33.950747 | orchestrator | Monday 05 January 2026 00:51:49 +0000 (0:00:01.291) 0:04:08.468 ********
2026-01-05 00:54:33.950761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.950771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.950785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.950804 | orchestrator |
2026-01-05 00:54:33.950811 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-01-05 00:54:33.950818 | orchestrator | Monday 05 January 2026 00:51:52 +0000 (0:00:03.777) 0:04:12.246 ********
2026-01-05 00:54:33.950825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.950834 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.950846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.950855 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.950864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-05 00:54:33.950873 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.950880 | orchestrator |
2026-01-05 00:54:33.950885 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-01-05 00:54:33.950895 | orchestrator | Monday 05 January 2026 00:51:53 +0000 (0:00:00.542) 0:04:12.788 ********
2026-01-05 00:54:33.950900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950906 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950911 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:33.950922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950932 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:33.950937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-05 00:54:33.950947 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:33.950952 | orchestrator |
2026-01-05 00:54:33.950957 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-01-05 00:54:33.950962 | orchestrator | Monday 05 January 2026 00:51:54 +0000 (0:00:00.831) 0:04:13.619 ********
2026-01-05 00:54:33.950967 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.950973 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.950978 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.950982 | orchestrator |
2026-01-05 00:54:33.950988 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-01-05 00:54:33.950993 | orchestrator | Monday 05 January 2026 00:51:55 +0000 (0:00:01.300) 0:04:14.920 ********
2026-01-05 00:54:33.950998 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:54:33.951003 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:54:33.951008 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:54:33.951013 | orchestrator |
2026-01-05 00:54:33.951018 | orchestrator | TASK [include_role : nova] *****************************************************
2026-01-05 00:54:33.951024 | orchestrator | Monday 05 January 2026 00:51:57 +0000 (0:00:01.942) 0:04:16.863 ********
2026-01-05 00:54:33.951029 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:54:33.951034 | orchestrator |
2026-01-05 00:54:33.951039 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-01-05 00:54:33.951044 | orchestrator | Monday 05 January 2026 00:51:59 +0000 (0:00:01.588) 0:04:18.452 ********
2026-01-05 00:54:33.951053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.951064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.951086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.951104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951125 | orchestrator | 2026-01-05 00:54:33.951130 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-05 00:54:33.951135 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:04.365) 0:04:22.817 ******** 2026-01-05 00:54:33.951144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.951153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951163 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.951179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951193 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.951208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.951222 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951227 | orchestrator | 2026-01-05 00:54:33.951232 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-05 00:54:33.951237 | orchestrator | Monday 05 January 2026 00:52:04 +0000 (0:00:01.296) 0:04:24.113 ******** 2026-01-05 00:54:33.951243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951254 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951265 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951309 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:54:33.951324 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951329 | orchestrator | 2026-01-05 00:54:33.951334 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-05 00:54:33.951339 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:00.947) 0:04:25.060 ******** 2026-01-05 00:54:33.951344 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.951349 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.951354 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.951359 | orchestrator | 2026-01-05 00:54:33.951365 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-05 00:54:33.951370 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:01.295) 0:04:26.356 ******** 2026-01-05 00:54:33.951375 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.951380 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.951385 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.951390 | orchestrator | 2026-01-05 00:54:33.951398 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-05 00:54:33.951404 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:02.132) 0:04:28.489 ******** 2026-01-05 
00:54:33.951409 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.951414 | orchestrator | 2026-01-05 00:54:33.951419 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-05 00:54:33.951424 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:01.623) 0:04:30.112 ******** 2026-01-05 00:54:33.951429 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-05 00:54:33.951435 | orchestrator | 2026-01-05 00:54:33.951441 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-05 00:54:33.951446 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:00.828) 0:04:30.941 ******** 2026-01-05 00:54:33.951456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:54:33.951463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:54:33.951468 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:54:33.951474 | orchestrator | 2026-01-05 00:54:33.951482 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-05 00:54:33.951487 | orchestrator | Monday 05 January 2026 00:52:15 +0000 (0:00:04.420) 0:04:35.362 ******** 2026-01-05 00:54:33.951492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951498 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951508 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951519 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951524 | orchestrator | 2026-01-05 00:54:33.951533 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-05 00:54:33.951538 | orchestrator | Monday 05 January 2026 00:52:17 +0000 (0:00:01.760) 0:04:37.123 ******** 2026-01-05 00:54:33.951543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951560 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951576 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:54:33.951592 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951616 | orchestrator | 2026-01-05 00:54:33.951622 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:54:33.951627 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:01.626) 0:04:38.750 ******** 2026-01-05 00:54:33.951632 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.951638 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.951643 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.951648 | orchestrator | 2026-01-05 00:54:33.951653 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:54:33.951658 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:02.681) 0:04:41.432 ******** 2026-01-05 00:54:33.951663 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.951668 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.951673 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.951678 | orchestrator | 2026-01-05 00:54:33.951686 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] 
************* 2026-01-05 00:54:33.951692 | orchestrator | Monday 05 January 2026 00:52:25 +0000 (0:00:03.153) 0:04:44.586 ******** 2026-01-05 00:54:33.951697 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-05 00:54:33.951703 | orchestrator | 2026-01-05 00:54:33.951708 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-05 00:54:33.951713 | orchestrator | Monday 05 January 2026 00:52:26 +0000 (0:00:01.615) 0:04:46.201 ******** 2026-01-05 00:54:33.951718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951724 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951738 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951753 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951758 | orchestrator | 2026-01-05 00:54:33.951763 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-05 00:54:33.951768 | orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:01.294) 0:04:47.495 ******** 2026-01-05 00:54:33.951774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951779 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951790 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:54:33.951803 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951808 | orchestrator | 2026-01-05 00:54:33.951813 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-05 00:54:33.951819 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:01.282) 0:04:48.777 ******** 2026-01-05 00:54:33.951824 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951829 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951834 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951839 | orchestrator | 2026-01-05 00:54:33.951844 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:54:33.951849 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:01.698) 0:04:50.476 ******** 2026-01-05 00:54:33.951854 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.951860 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.951869 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.951874 | orchestrator | 2026-01-05 00:54:33.951879 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:54:33.951884 | orchestrator | Monday 05 January 2026 
00:52:34 +0000 (0:00:03.062) 0:04:53.538 ******** 2026-01-05 00:54:33.951889 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.951894 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.951899 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.951904 | orchestrator | 2026-01-05 00:54:33.951909 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-05 00:54:33.951915 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:02.793) 0:04:56.331 ******** 2026-01-05 00:54:33.951920 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-05 00:54:33.951925 | orchestrator | 2026-01-05 00:54:33.951930 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-05 00:54:33.951935 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:00.898) 0:04:57.230 ******** 2026-01-05 00:54:33.951944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.951949 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.951955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.951960 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.951965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.951970 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.951976 | orchestrator | 2026-01-05 00:54:33.951981 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-05 00:54:33.951986 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:01.386) 0:04:58.617 ******** 2026-01-05 00:54:33.951991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.951996 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.952004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.952013 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.952019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:54:33.952024 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.952029 | orchestrator | 2026-01-05 00:54:33.952034 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-05 00:54:33.952039 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:01.421) 0:05:00.038 ******** 2026-01-05 00:54:33.952044 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.952049 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.952054 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.952059 | orchestrator | 2026-01-05 00:54:33.952065 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:54:33.952070 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:01.651) 0:05:01.689 ******** 2026-01-05 00:54:33.952075 | orchestrator | 
ok: [testbed-node-0] 2026-01-05 00:54:33.952080 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.952085 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.952090 | orchestrator | 2026-01-05 00:54:33.952096 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:54:33.952101 | orchestrator | Monday 05 January 2026 00:52:44 +0000 (0:00:02.442) 0:05:04.132 ******** 2026-01-05 00:54:33.952106 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.952111 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.952116 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.952121 | orchestrator | 2026-01-05 00:54:33.952126 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-05 00:54:33.952131 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:03.428) 0:05:07.560 ******** 2026-01-05 00:54:33.952139 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.952145 | orchestrator | 2026-01-05 00:54:33.952150 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-05 00:54:33.952155 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:01.651) 0:05:09.212 ******** 2026-01-05 00:54:33.952161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.952167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.952186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.952366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952376 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952398 | orchestrator | 2026-01-05 00:54:33.952404 | 
orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-05 00:54:33.952409 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:03.150) 0:05:12.363 ******** 2026-01-05 00:54:33.952418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.952425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952488 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.952497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.952503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952546 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.952552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.952557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:54:33.952565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:54:33.952591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:33.952643 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.952649 | orchestrator | 2026-01-05 00:54:33.952654 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-05 00:54:33.952664 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.686) 0:05:13.049 ******** 2026-01-05 00:54:33.952670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952681 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.952686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952696 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.952701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:54:33.952711 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.952716 | orchestrator | 2026-01-05 00:54:33.952721 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-05 00:54:33.952726 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:01.314) 0:05:14.363 ******** 2026-01-05 00:54:33.952731 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.952736 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.952742 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.952747 | orchestrator | 2026-01-05 00:54:33.952752 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-05 00:54:33.952757 | orchestrator | Monday 05 January 2026 00:52:56 +0000 (0:00:01.328) 0:05:15.692 ******** 2026-01-05 00:54:33.952762 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.952767 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.952772 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 00:54:33.952777 | orchestrator | 2026-01-05 00:54:33.952783 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-05 00:54:33.952791 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:02.100) 0:05:17.792 ******** 2026-01-05 00:54:33.952796 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.952801 | orchestrator | 2026-01-05 00:54:33.952806 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-05 00:54:33.952811 | orchestrator | Monday 05 January 2026 00:52:59 +0000 (0:00:01.357) 0:05:19.150 ******** 2026-01-05 00:54:33.952817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:54:33.952846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:54:33.952853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:54:33.952859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:54:33.952868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:54:33.952889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:54:33.952900 | orchestrator | 2026-01-05 00:54:33.952905 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-05 00:54:33.952910 | orchestrator | Monday 05 January 2026 00:53:05 +0000 (0:00:05.582) 0:05:24.733 ******** 2026-01-05 00:54:33.952916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:54:33.952921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:54:33.952930 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.952935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:54:33.952959 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:54:33.952966 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.952973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:54:33.952979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:54:33.952986 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.952992 | orchestrator | 2026-01-05 00:54:33.952998 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-05 00:54:33.953007 | orchestrator | Monday 05 January 2026 00:53:06 +0000 (0:00:00.670) 0:05:25.404 ******** 2026-01-05 00:54:33.953014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:54:33.953020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
2026-01-05 00:54:33.953030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:54:33.953037 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:54:33.953049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:54:33.953055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:54:33.953061 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:54:33.953094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:54:33.953101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:54:33.953108 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953114 | orchestrator | 2026-01-05 00:54:33.953120 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-05 00:54:33.953126 | orchestrator | Monday 05 January 2026 00:53:06 +0000 (0:00:00.974) 0:05:26.378 ******** 2026-01-05 00:54:33.953132 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953138 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953144 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953150 | orchestrator | 2026-01-05 00:54:33.953156 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-05 00:54:33.953162 | orchestrator | Monday 05 January 2026 00:53:07 +0000 (0:00:00.867) 0:05:27.246 ******** 2026-01-05 00:54:33.953168 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953174 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953180 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953186 | orchestrator | 2026-01-05 00:54:33.953192 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-05 00:54:33.953198 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:01.553) 0:05:28.799 ******** 2026-01-05 00:54:33.953204 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.953210 | orchestrator | 2026-01-05 00:54:33.953216 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-05 00:54:33.953222 | orchestrator | Monday 05 January 2026 00:53:10 +0000 (0:00:01.490) 0:05:30.290 ******** 2026-01-05 00:54:33.953229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:54:33.953261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:54:33.953309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:54:33.953338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953371 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:54:33.953394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:54:33.953450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:54:33.953478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953512 | orchestrator | 
2026-01-05 00:54:33.953520 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-05 00:54:33.953525 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:04.496) 0:05:34.787 ******** 2026-01-05 00:54:33.953531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 00:54:33.953540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 00:54:33.953573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953618 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 00:54:33.953629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 00:54:33.953653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:54:33.953667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 00:54:33.953687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 00:54:33.953721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:54:33.953739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:54:33.953745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953751 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-05 00:54:33.953766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:54:33.953771 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953776 | orchestrator | 2026-01-05 00:54:33.953781 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-05 00:54:33.953786 | orchestrator | Monday 05 January 2026 00:53:16 +0000 (0:00:01.318) 0:05:36.105 ******** 2026-01-05 00:54:33.953791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953818 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953847 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 00:54:33.953863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 00:54:33.953873 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953879 | orchestrator | 2026-01-05 00:54:33.953884 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-05 00:54:33.953892 | orchestrator | Monday 05 January 2026 00:53:17 +0000 (0:00:01.098) 0:05:37.203 ******** 2026-01-05 00:54:33.953897 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953902 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953907 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953912 | orchestrator | 2026-01-05 00:54:33.953917 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-05 00:54:33.953922 | orchestrator | Monday 05 January 2026 00:53:18 +0000 (0:00:00.458) 0:05:37.662 ******** 2026-01-05 00:54:33.953927 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.953932 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.953937 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.953942 | orchestrator | 2026-01-05 00:54:33.953947 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-05 00:54:33.953952 | orchestrator | Monday 05 January 2026 00:53:19 +0000 (0:00:01.535) 0:05:39.197 ******** 2026-01-05 00:54:33.953957 | orchestrator | included: 
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.953962 | orchestrator | 2026-01-05 00:54:33.953967 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-05 00:54:33.953976 | orchestrator | Monday 05 January 2026 00:53:21 +0000 (0:00:01.865) 0:05:41.063 ******** 2026-01-05 00:54:33.953984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:33.953990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:33.953996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:33.954001 | orchestrator | 2026-01-05 00:54:33.954007 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-05 00:54:33.954041 | orchestrator | Monday 05 January 2026 00:53:24 +0000 (0:00:02.512) 0:05:43.575 ******** 2026-01-05 00:54:33.954051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:54:33.954064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:54:33.954070 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954076 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954081 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:54:33.954086 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954091 | orchestrator | 2026-01-05 00:54:33.954097 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-05 00:54:33.954102 | orchestrator | Monday 05 January 2026 00:53:24 +0000 (0:00:00.472) 0:05:44.048 ******** 2026-01-05 00:54:33.954107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:54:33.954112 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:54:33.954122 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:54:33.954136 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954144 | orchestrator | 2026-01-05 00:54:33.954152 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-05 00:54:33.954160 | orchestrator | Monday 05 January 2026 00:53:25 +0000 (0:00:01.102) 0:05:45.151 ******** 2026-01-05 00:54:33.954168 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954176 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954198 | orchestrator | 2026-01-05 00:54:33.954209 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-05 00:54:33.954214 | orchestrator | Monday 05 January 2026 00:53:26 +0000 (0:00:00.505) 0:05:45.656 ******** 2026-01-05 00:54:33.954219 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954224 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954229 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954234 | orchestrator | 2026-01-05 00:54:33.954239 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-05 00:54:33.954244 | orchestrator | Monday 05 January 2026 00:53:27 +0000 (0:00:01.446) 0:05:47.102 ******** 2026-01-05 00:54:33.954249 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:33.954254 | orchestrator | 2026-01-05 00:54:33.954259 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-05 00:54:33.954264 | orchestrator | Monday 05 January 2026 00:53:29 +0000 (0:00:01.934) 0:05:49.037 ******** 2026-01-05 00:54:33.954272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:33.954324 | orchestrator | 2026-01-05 00:54:33.954329 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-05 00:54:33.954334 | orchestrator | Monday 05 January 2026 00:53:35 +0000 
(0:00:06.283) 0:05:55.321 ******** 2026-01-05 00:54:33.954339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.954347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 
00:54:33.954357 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.954370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}}}})  2026-01-05 00:54:33.954376 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.954386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})  2026-01-05 00:54:33.954396 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954401 | orchestrator | 2026-01-05 00:54:33.954406 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-05 00:54:33.954414 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:00.671) 0:05:55.992 ******** 2026-01-05 00:54:33.954419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954440 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}})  2026-01-05 00:54:33.954456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954469 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:54:33.954501 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954508 | orchestrator | 2026-01-05 00:54:33.954516 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-05 00:54:33.954525 | orchestrator | Monday 05 January 2026 00:53:38 +0000 (0:00:01.770) 0:05:57.763 
******** 2026-01-05 00:54:33.954532 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.954539 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.954548 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.954556 | orchestrator | 2026-01-05 00:54:33.954563 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-05 00:54:33.954571 | orchestrator | Monday 05 January 2026 00:53:39 +0000 (0:00:01.375) 0:05:59.138 ******** 2026-01-05 00:54:33.954579 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.954588 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.954614 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.954623 | orchestrator | 2026-01-05 00:54:33.954631 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-05 00:54:33.954639 | orchestrator | Monday 05 January 2026 00:53:41 +0000 (0:00:02.150) 0:06:01.289 ******** 2026-01-05 00:54:33.954646 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954653 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954660 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954668 | orchestrator | 2026-01-05 00:54:33.954677 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-05 00:54:33.954685 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.306) 0:06:01.596 ******** 2026-01-05 00:54:33.954694 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954699 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954704 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954709 | orchestrator | 2026-01-05 00:54:33.954715 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-05 00:54:33.954726 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.269) 0:06:01.865 ******** 
2026-01-05 00:54:33.954731 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954736 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954741 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954746 | orchestrator | 2026-01-05 00:54:33.954751 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-05 00:54:33.954756 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.517) 0:06:02.383 ******** 2026-01-05 00:54:33.954761 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954766 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954771 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954776 | orchestrator | 2026-01-05 00:54:33.954781 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-05 00:54:33.954786 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:00.279) 0:06:02.663 ******** 2026-01-05 00:54:33.954791 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954796 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954801 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954806 | orchestrator | 2026-01-05 00:54:33.954811 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-05 00:54:33.954816 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:00.335) 0:06:02.998 ******** 2026-01-05 00:54:33.954821 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.954826 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.954831 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.954836 | orchestrator | 2026-01-05 00:54:33.954841 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-05 00:54:33.954846 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.704) 0:06:03.703 ******** 
2026-01-05 00:54:33.954856 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.954862 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.954867 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.954872 | orchestrator | 2026-01-05 00:54:33.954877 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-05 00:54:33.954882 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.649) 0:06:04.353 ******** 2026-01-05 00:54:33.954887 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.954892 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.954896 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.954902 | orchestrator | 2026-01-05 00:54:33.954907 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-05 00:54:33.954912 | orchestrator | Monday 05 January 2026 00:53:45 +0000 (0:00:00.315) 0:06:04.668 ******** 2026-01-05 00:54:33.954917 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.954923 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.954931 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.954938 | orchestrator | 2026-01-05 00:54:33.954958 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-05 00:54:33.954972 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:00.871) 0:06:05.540 ******** 2026-01-05 00:54:33.954979 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.954986 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.954994 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955001 | orchestrator | 2026-01-05 00:54:33.955011 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-05 00:54:33.955018 | orchestrator | Monday 05 January 2026 00:53:47 +0000 (0:00:01.189) 0:06:06.729 ******** 2026-01-05 00:54:33.955026 | orchestrator | ok: [testbed-node-0] 2026-01-05 
00:54:33.955033 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.955040 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955047 | orchestrator | 2026-01-05 00:54:33.955055 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-05 00:54:33.955062 | orchestrator | Monday 05 January 2026 00:53:48 +0000 (0:00:00.887) 0:06:07.616 ******** 2026-01-05 00:54:33.955070 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.955077 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.955086 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.955094 | orchestrator | 2026-01-05 00:54:33.955102 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-05 00:54:33.955109 | orchestrator | Monday 05 January 2026 00:53:58 +0000 (0:00:09.986) 0:06:17.603 ******** 2026-01-05 00:54:33.955116 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.955123 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.955131 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955138 | orchestrator | 2026-01-05 00:54:33.955145 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-05 00:54:33.955153 | orchestrator | Monday 05 January 2026 00:53:58 +0000 (0:00:00.764) 0:06:18.367 ******** 2026-01-05 00:54:33.955160 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.955169 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.955176 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.955184 | orchestrator | 2026-01-05 00:54:33.955191 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-05 00:54:33.955199 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:16.607) 0:06:34.975 ******** 2026-01-05 00:54:33.955206 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.955214 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 00:54:33.955222 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955230 | orchestrator | 2026-01-05 00:54:33.955238 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-05 00:54:33.955246 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:00.998) 0:06:35.974 ******** 2026-01-05 00:54:33.955254 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:33.955261 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:33.955277 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:33.955286 | orchestrator | 2026-01-05 00:54:33.955294 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-05 00:54:33.955303 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:09.920) 0:06:45.894 ******** 2026-01-05 00:54:33.955311 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955318 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.955326 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955335 | orchestrator | 2026-01-05 00:54:33.955343 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-05 00:54:33.955351 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:00.320) 0:06:46.215 ******** 2026-01-05 00:54:33.955365 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955374 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.955382 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955389 | orchestrator | 2026-01-05 00:54:33.955397 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-05 00:54:33.955405 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:00.321) 0:06:46.536 ******** 2026-01-05 00:54:33.955413 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955422 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:54:33.955430 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955438 | orchestrator | 2026-01-05 00:54:33.955446 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-05 00:54:33.955455 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:00.549) 0:06:47.086 ******** 2026-01-05 00:54:33.955462 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955471 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.955479 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955488 | orchestrator | 2026-01-05 00:54:33.955497 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-05 00:54:33.955505 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.318) 0:06:47.405 ******** 2026-01-05 00:54:33.955513 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955521 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.955530 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955539 | orchestrator | 2026-01-05 00:54:33.955548 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-05 00:54:33.955555 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.367) 0:06:47.772 ******** 2026-01-05 00:54:33.955560 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.955565 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.955571 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.955576 | orchestrator | 2026-01-05 00:54:33.955582 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-05 00:54:33.955587 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.328) 0:06:48.101 ******** 2026-01-05 00:54:33.955593 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.955620 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 00:54:33.955626 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955631 | orchestrator | 2026-01-05 00:54:33.955637 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-05 00:54:33.955642 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:01.214) 0:06:49.316 ******** 2026-01-05 00:54:33.955648 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:33.955653 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:33.955659 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:33.955664 | orchestrator | 2026-01-05 00:54:33.955670 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:54:33.955683 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:54:33.955690 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:54:33.955702 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:54:33.955708 | orchestrator | 2026-01-05 00:54:33.955713 | orchestrator | 2026-01-05 00:54:33.955719 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:54:33.955724 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.874) 0:06:50.190 ******** 2026-01-05 00:54:33.955730 | orchestrator | =============================================================================== 2026-01-05 00:54:33.955735 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.61s 2026-01-05 00:54:33.955741 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.99s 2026-01-05 00:54:33.955746 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.92s 2026-01-05 00:54:33.955751 | 
orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.40s 2026-01-05 00:54:33.955757 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.28s 2026-01-05 00:54:33.955762 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.78s 2026-01-05 00:54:33.955767 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.58s 2026-01-05 00:54:33.955773 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.89s 2026-01-05 00:54:33.955778 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.82s 2026-01-05 00:54:33.955783 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.52s 2026-01-05 00:54:33.955789 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.50s 2026-01-05 00:54:33.955794 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.50s 2026-01-05 00:54:33.955799 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.45s 2026-01-05 00:54:33.955805 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.42s 2026-01-05 00:54:33.955810 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.37s 2026-01-05 00:54:33.955816 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.36s 2026-01-05 00:54:33.955821 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s 2026-01-05 00:54:33.955826 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.34s 2026-01-05 00:54:33.955832 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.17s 2026-01-05 00:54:33.955842 | 
orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.17s 2026-01-05 00:54:33.955848 | orchestrator | 2026-01-05 00:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:36.971846 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:54:36.975731 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state STARTED 2026-01-05 00:54:36.976423 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:54:36.976709 | orchestrator | 2026-01-05 00:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:56:35.954832 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:56:35.961316 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task 6eceda51-8e7d-454f-b1d5-cb8c3c24e6f8 is in state SUCCESS 2026-01-05 00:56:35.965340 | orchestrator | 2026-01-05 00:56:35.965481 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 00:56:35.965493 | orchestrator | 2.16.14 2026-01-05 00:56:35.965501 | orchestrator | 2026-01-05 00:56:35.965508 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-05 00:56:35.965515 | orchestrator | 2026-01-05 00:56:35.965521 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-05 00:56:35.965528 | orchestrator | Monday 05 January 2026 00:44:57 +0000 (0:00:00.929) 0:00:00.929 ******** 2026-01-05 00:56:35.965535 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.965543 | orchestrator | 2026-01-05 00:56:35.965548 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-05 00:56:35.965555 | orchestrator | Monday 05 January 2026 00:44:58 +0000 (0:00:01.267) 0:00:02.196 ******** 2026-01-05
00:56:35.965561 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.965567 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.965573 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.965578 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.965584 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.965590 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.965595 | orchestrator | 2026-01-05 00:56:35.965648 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-05 00:56:35.965656 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:01.859) 0:00:04.056 ******** 2026-01-05 00:56:35.965662 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.965668 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.965674 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.965689 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.965702 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.965708 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.965714 | orchestrator | 2026-01-05 00:56:35.965720 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-05 00:56:35.965725 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:00.851) 0:00:04.908 ******** 2026-01-05 00:56:35.965731 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.965737 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.965742 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.965748 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.965754 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.965763 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.965773 | orchestrator | 2026-01-05 00:56:35.965779 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-05 00:56:35.965785 | orchestrator | Monday 05 January 2026 00:45:02 +0000 (0:00:01.099) 
0:00:06.007 ******** 2026-01-05 00:56:35.965791 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.965797 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.965802 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.965808 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.965835 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.965845 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.965854 | orchestrator | 2026-01-05 00:56:35.965863 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-05 00:56:35.965872 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:00.880) 0:00:06.887 ******** 2026-01-05 00:56:35.965882 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.965890 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.965899 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.965909 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.965918 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.965926 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.965935 | orchestrator | 2026-01-05 00:56:35.965980 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-05 00:56:35.965993 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.942) 0:00:07.830 ******** 2026-01-05 00:56:35.966004 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.966224 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.966244 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.966257 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.966265 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.966271 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.966278 | orchestrator | 2026-01-05 00:56:35.966285 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-05 00:56:35.966292 | orchestrator | Monday 05 January 
2026 00:45:05 +0000 (0:00:01.146) 0:00:08.976 ******** 2026-01-05 00:56:35.966298 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.966318 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.966324 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.966330 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.966335 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.966341 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.966347 | orchestrator | 2026-01-05 00:56:35.966352 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-05 00:56:35.966358 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.960) 0:00:09.936 ******** 2026-01-05 00:56:35.966382 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.966389 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.966395 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.966400 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.966406 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.966412 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.966421 | orchestrator | 2026-01-05 00:56:35.966431 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-05 00:56:35.966440 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:01.136) 0:00:11.073 ******** 2026-01-05 00:56:35.966449 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:56:35.966457 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:56:35.966466 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:56:35.966475 | orchestrator | 2026-01-05 00:56:35.966484 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-05 
00:56:35.966493 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.924) 0:00:11.997 ******** 2026-01-05 00:56:35.966501 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.966510 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.966519 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.966548 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.966558 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.966567 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.966589 | orchestrator | 2026-01-05 00:56:35.966601 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-05 00:56:35.966617 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:02.092) 0:00:14.090 ******** 2026-01-05 00:56:35.966639 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:56:35.966648 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:56:35.966657 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:56:35.966666 | orchestrator | 2026-01-05 00:56:35.966675 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 00:56:35.966684 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:03.048) 0:00:17.139 ******** 2026-01-05 00:56:35.966693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 00:56:35.966701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 00:56:35.966709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 00:56:35.966718 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.966728 | orchestrator | 2026-01-05 00:56:35.966736 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-05 00:56:35.966745 | 
orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:00.721) 0:00:17.860 ********
2026-01-05 00:56:35.966758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966830 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.966836 | orchestrator |
2026-01-05 00:56:35.966842 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-05 00:56:35.966848 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.788) 0:00:18.649 ********
2026-01-05 00:56:35.966919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.966951 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.966956 | orchestrator |
2026-01-05 00:56:35.966962 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-05 00:56:35.966968 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.672) 0:00:19.322 ********
2026-01-05 00:56:35.966987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 00:45:11.393751', 'end': '2026-01-05 00:45:11.692558', 'delta': '0:00:00.298807', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.967005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 00:45:12.513857', 'end': '2026-01-05 00:45:12.792601', 'delta': '0:00:00.278744', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.967012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 00:45:13.276224', 'end': '2026-01-05 00:45:13.584593', 'delta': '0:00:00.308369', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-05 00:56:35.967018 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967023 | orchestrator |
2026-01-05 00:56:35.967030 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-05 00:56:35.967035 | orchestrator | Monday 05 January 2026 00:45:16 +0000 (0:00:00.570) 0:00:19.892 ********
2026-01-05 00:56:35.967041 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.967047 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.967053 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.967058 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.967064 | orchestrator | ok:
[testbed-node-2]
2026-01-05 00:56:35.967070 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.967075 | orchestrator |
2026-01-05 00:56:35.967081 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-05 00:56:35.967087 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:02.792) 0:00:22.685 ********
2026-01-05 00:56:35.967093 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.967098 | orchestrator |
2026-01-05 00:56:35.967283 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-05 00:56:35.967300 | orchestrator | Monday 05 January 2026 00:45:20 +0000 (0:00:01.430) 0:00:24.115 ********
2026-01-05 00:56:35.967310 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967320 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967330 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967339 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967345 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967351 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967356 | orchestrator |
2026-01-05 00:56:35.967386 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-05 00:56:35.967398 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:01.773) 0:00:25.889 ********
2026-01-05 00:56:35.967418 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967428 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967438 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967444 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967450 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967456 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967462 | orchestrator |
2026-01-05 00:56:35.967467 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 00:56:35.967473 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:02.166) 0:00:28.056 ********
2026-01-05 00:56:35.967479 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967484 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967490 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967495 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967501 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967507 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967512 | orchestrator |
2026-01-05 00:56:35.967518 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-05 00:56:35.967523 | orchestrator | Monday 05 January 2026 00:45:26 +0000 (0:00:01.981) 0:00:30.038 ********
2026-01-05 00:56:35.967529 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967535 | orchestrator |
2026-01-05 00:56:35.967541 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-05 00:56:35.967546 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:00.303) 0:00:30.341 ********
2026-01-05 00:56:35.967552 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967558 | orchestrator |
2026-01-05 00:56:35.967563 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 00:56:35.967569 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:00.746) 0:00:31.087 ********
2026-01-05 00:56:35.967575 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967581 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967586 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967599 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967605 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967610 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967616 | orchestrator |
2026-01-05 00:56:35.967622 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-05 00:56:35.967628 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:01.195) 0:00:32.283 ********
2026-01-05 00:56:35.967633 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967639 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967645 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967650 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967656 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967661 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967667 | orchestrator |
2026-01-05 00:56:35.967673 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-05 00:56:35.967679 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:00.882) 0:00:33.165 ********
2026-01-05 00:56:35.967685 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967691 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967697 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967702 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967708 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967714 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967719 | orchestrator |
2026-01-05 00:56:35.967725 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-05 00:56:35.967731 | orchestrator | Monday 05 January 2026 00:45:30 +0000 (0:00:00.687) 0:00:33.853 ********
2026-01-05 00:56:35.967737 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967742 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967753 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967759 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967766 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967776 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967786 | orchestrator |
2026-01-05 00:56:35.967795 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-05 00:56:35.967805 | orchestrator | Monday 05 January 2026 00:45:31 +0000 (0:00:00.945) 0:00:34.798 ********
2026-01-05 00:56:35.967814 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967823 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967831 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967840 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967849 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967857 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967868 | orchestrator |
2026-01-05 00:56:35.967878 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-05 00:56:35.967887 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:00.614) 0:00:35.413 ********
2026-01-05 00:56:35.967898 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967907 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.967917 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.967925 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.967935 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.967949 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.967962 | orchestrator |
2026-01-05 00:56:35.967971 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-05 00:56:35.967980 | orchestrator | Monday 05 January 2026 00:45:33 +0000 (0:00:01.407) 0:00:36.820 ********
2026-01-05 00:56:35.967990 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.967998 |
orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.968070 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.968155 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.968162 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.968175 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.968188 | orchestrator | 2026-01-05 00:56:35.968202 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-05 00:56:35.968209 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:00.536) 0:00:37.357 ******** 2026-01-05 00:56:35.968216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc', 'dm-uuid-LVM-gSvEmzN4sR9qQBYCmcrvBPRZDc8ahtdz7QNh6Z7yAClPqMCMIbCPf8VhzgZxO5zo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21', 'dm-uuid-LVM-MiZyFfsPoyjf4UhEA6dyhdxf8Nt4buWcB0XMxgbd6nRp4y3WboeXGvfpk5cHIS0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968327 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-16yBjx-FwIA-tBBg-2Dng-Ip0w-C2XU-Haljpf', 'scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4', 'scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9EP0q-zA7u-Zh2T-PDAd-IujH-Rp2z-NGEN8T', 'scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392', 'scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757', 'dm-uuid-LVM-WytFOHQK3TrfIaOFPVQ0VV2bPy4iCg1x50pstBe59FSIXJ1gqkDnGo60OOnA4yLO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0', 'scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d', 'dm-uuid-LVM-qPfa1lYL90pRKqe9QP0OQgRUjxiwecdBx92dXsfZGMyB7zYsWbhzbfqkgaiYUwfs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2026-01-05 00:56:35.968439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968480 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lgxM7L-2H2s-ydZZ-G3Mt-VVkw-Jptq-qugIyB', 'scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9', 'scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtpNGr-twnR-Z5N1-ELuq-SfMI-3xi9-STF9tw', 'scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff', 'scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302', 'scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9', 'dm-uuid-LVM-yftGaJfF3fAOG2rIDGE3fDbcvFqQc3krVsVongDe66YEBcfSeoCfwGjB54VjJdci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769', 
'dm-uuid-LVM-Qfmqg5JUUSt7eCfNBoqOJHNYrALv8lFXgkFwVgtPuBbxsgTPXNNDi25IhISi2UCn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968691 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.968704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968930 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part1', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part14', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part15', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part16', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.968976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.968989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NHu62d-t11c-UK62-E30C-U5Oe-QyNU-2jm3BJ', 'scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52', 'scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZlCbJ8-1XNk-wRmZ-rsfx-5dxN-dsVr-H6mV0e', 'scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3', 'scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c', 'scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969108 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.969117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969163 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.969180 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.969187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969244 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969294 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969312 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.969318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:56:35.969340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part1', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part14', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part15', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part16', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:56:35.969418 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.969425 | orchestrator | 2026-01-05 00:56:35.969431 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-05 00:56:35.969438 | orchestrator | Monday 05 January 2026 00:45:35 +0000 (0:00:01.469) 0:00:38.826 ******** 2026-01-05 00:56:35.969454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc', 'dm-uuid-LVM-gSvEmzN4sR9qQBYCmcrvBPRZDc8ahtdz7QNh6Z7yAClPqMCMIbCPf8VhzgZxO5zo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.969462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21', 'dm-uuid-LVM-MiZyFfsPoyjf4UhEA6dyhdxf8Nt4buWcB0XMxgbd6nRp4y3WboeXGvfpk5cHIS0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.969485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.969492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.969610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-16yBjx-FwIA-tBBg-2Dng-Ip0w-C2XU-Haljpf', 'scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4', 'scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9EP0q-zA7u-Zh2T-PDAd-IujH-Rp2z-NGEN8T', 'scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392', 'scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0', 'scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970901 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757', 'dm-uuid-LVM-WytFOHQK3TrfIaOFPVQ0VV2bPy4iCg1x50pstBe59FSIXJ1gqkDnGo60OOnA4yLO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d', 'dm-uuid-LVM-qPfa1lYL90pRKqe9QP0OQgRUjxiwecdBx92dXsfZGMyB7zYsWbhzbfqkgaiYUwfs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970949 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.970956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9', 'dm-uuid-LVM-yftGaJfF3fAOG2rIDGE3fDbcvFqQc3krVsVongDe66YEBcfSeoCfwGjB54VjJdci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970973 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769', 'dm-uuid-LVM-Qfmqg5JUUSt7eCfNBoqOJHNYrALv8lFXgkFwVgtPuBbxsgTPXNNDi25IhISi2UCn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.970996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971027 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971043 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971053 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971136 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971174 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971189 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971205 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971211 | orchestrator | skipping: [testbed-node-0] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971237 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971243 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971253 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971265 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971287 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part1', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part14', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part15', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part16', 'scsi-SQEMU_QEMU_HARDDISK_27d06f6a-b839-4b4f-97f0-cacfc59b2589-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 00:56:35.971355 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NHu62d-t11c-UK62-E30C-U5Oe-QyNU-2jm3BJ', 'scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52', 'scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971466 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lgxM7L-2H2s-ydZZ-G3Mt-VVkw-Jptq-qugIyB', 'scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9', 'scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZlCbJ8-1XNk-wRmZ-rsfx-5dxN-dsVr-H6mV0e', 'scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3', 'scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971501 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtpNGr-twnR-Z5N1-ELuq-SfMI-3xi9-STF9tw', 'scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff', 'scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302', 'scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971540 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971552 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c', 'scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971575 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.971582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971590 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.971602 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_47b5dff2-66dd-4733-9974-3b39262202ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971610 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.971624 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971641 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.971650 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971658 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971678 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971691 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971727 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971737 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part1', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part14', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part15', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part16', 'scsi-SQEMU_QEMU_HARDDISK_afb8d460-827e-407a-9ee0-f351bfc1cb1b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:56:35.971754 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.971760 | orchestrator | 2026-01-05 00:56:35.971770 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-05 00:56:35.971776 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:01.726) 0:00:40.552 ******** 2026-01-05 00:56:35.971782 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.971788 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.971794 | orchestrator | ok: [testbed-node-5] 2026-01-05 
00:56:35.971800 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.971806 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.971812 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.971817 | orchestrator | 2026-01-05 00:56:35.971823 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-05 00:56:35.971829 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:01.589) 0:00:42.141 ******** 2026-01-05 00:56:35.971835 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.971841 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.971846 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.971852 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.971857 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.971863 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.971868 | orchestrator | 2026-01-05 00:56:35.971874 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 00:56:35.971880 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.715) 0:00:42.857 ******** 2026-01-05 00:56:35.971885 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.971891 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.971897 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.971903 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.971908 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.971914 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.971920 | orchestrator | 2026-01-05 00:56:35.971925 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-05 00:56:35.971931 | orchestrator | Monday 05 January 2026 00:45:40 +0000 (0:00:00.973) 0:00:43.831 ******** 2026-01-05 00:56:35.971937 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.971942 | orchestrator | skipping: [testbed-node-4] 
2026-01-05 00:56:35.971948 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.971954 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.971959 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.971965 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.971970 | orchestrator | 2026-01-05 00:56:35.971976 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 00:56:35.971982 | orchestrator | Monday 05 January 2026 00:45:41 +0000 (0:00:01.413) 0:00:45.244 ******** 2026-01-05 00:56:35.971988 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.971993 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.971999 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.972005 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.972010 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.972016 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.972021 | orchestrator | 2026-01-05 00:56:35.972027 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-05 00:56:35.972033 | orchestrator | Monday 05 January 2026 00:45:43 +0000 (0:00:01.586) 0:00:46.830 ******** 2026-01-05 00:56:35.972039 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.972044 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.972050 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.972069 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.972092 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.972101 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.972107 | orchestrator | 2026-01-05 00:56:35.972112 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-05 00:56:35.972122 | orchestrator | Monday 05 January 2026 00:45:44 +0000 (0:00:01.495) 0:00:48.326 ******** 
2026-01-05 00:56:35.972128 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:56:35.972134 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:56:35.972140 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:56:35.972145 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:56:35.972151 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:56:35.972157 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:56:35.972163 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:56:35.972168 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:56:35.972174 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:56:35.972180 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.972185 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:56:35.972191 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:56:35.972197 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:56:35.972202 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:56:35.972208 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.972214 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:56:35.972219 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:56:35.972225 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.972231 | orchestrator |
2026-01-05 00:56:35.972237 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-05 00:56:35.972243 | orchestrator | Monday 05 January 2026 00:45:51 +0000 (0:00:06.021) 0:00:54.347 ********
2026-01-05 00:56:35.972248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:56:35.972254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:56:35.972260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:56:35.972266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:56:35.972271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:56:35.972277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:56:35.972283 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:56:35.972298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:56:35.972304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:56:35.972310 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.972315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.972321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.972327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.972332 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.972338 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.972344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:56:35.972349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:56:35.972355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:56:35.972361 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:56:35.972388 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.972399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:56:35.972404 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:56:35.972410 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.972416 | orchestrator |
2026-01-05 00:56:35.972421 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-05 00:56:35.972427 | orchestrator | Monday 05 January 2026 00:45:52 +0000 (0:00:01.007) 0:00:55.355 ********
2026-01-05 00:56:35.972433 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.972438 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.972444 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.972450 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.972456 | orchestrator |
2026-01-05 00:56:35.972462 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 00:56:35.972469 | orchestrator | Monday 05 January 2026 00:45:53 +0000 (0:00:01.488) 0:00:56.844 ********
2026-01-05 00:56:35.972474 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972480 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.972486 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.972491 | orchestrator |
2026-01-05 00:56:35.972497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 00:56:35.972503 | orchestrator | Monday 05 January 2026 00:45:54 +0000 (0:00:00.654) 0:00:57.498 ********
2026-01-05 00:56:35.972508 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972514 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.972520 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.972525 | orchestrator |
2026-01-05 00:56:35.972531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 00:56:35.972536 | orchestrator | Monday 05 January 2026 00:45:54 +0000 (0:00:00.706) 0:00:58.205 ********
2026-01-05 00:56:35.972542 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972548 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.972553 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.972559 | orchestrator |
2026-01-05 00:56:35.972565 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 00:56:35.972570 | orchestrator | Monday 05 January 2026 00:45:55 +0000 (0:00:00.786) 0:00:58.991 ********
2026-01-05 00:56:35.972579 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.972585 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.972591 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.972597 | orchestrator |
2026-01-05 00:56:35.972603 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 00:56:35.972608 | orchestrator | Monday 05 January 2026 00:45:56 +0000 (0:00:01.306) 0:01:00.298 ********
2026-01-05 00:56:35.972614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.972620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.972625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.972631 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972637 | orchestrator |
2026-01-05 00:56:35.972642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 00:56:35.972648 | orchestrator | Monday 05 January 2026 00:45:57 +0000 (0:00:00.819) 0:01:01.117 ********
2026-01-05 00:56:35.972654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.972659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.972665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.972670 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972676 | orchestrator |
2026-01-05 00:56:35.972682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-05 00:56:35.972687 | orchestrator | Monday 05 January 2026 00:45:58 +0000 (0:00:00.449) 0:01:01.566 ********
2026-01-05 00:56:35.972698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.972703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.972709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.972715 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.972720 | orchestrator |
2026-01-05 00:56:35.972726 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-05 00:56:35.972732 | orchestrator | Monday 05 January 2026 00:45:58 +0000 (0:00:00.556) 0:01:02.123 ********
2026-01-05 00:56:35.972737 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.972743 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.972749 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.972755 | orchestrator |
2026-01-05 00:56:35.972760 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-05 00:56:35.972766 | orchestrator | Monday 05 January 2026 00:45:59 +0000 (0:00:00.478) 0:01:02.601 ********
2026-01-05 00:56:35.972772 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-05 00:56:35.972778 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-05 00:56:35.972955 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-05 00:56:35.972965 | orchestrator |
2026-01-05 00:56:35.972971 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-05 00:56:35.972977 | orchestrator | Monday 05 January 2026 00:46:00 +0000 (0:00:01.179) 0:01:03.781 ********
2026-01-05 00:56:35.972983 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:56:35.972989 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:56:35.972995 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:56:35.973001 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.973007 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:56:35.973025 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:56:35.973032 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:56:35.973037 | orchestrator |
2026-01-05 00:56:35.973043 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-05 00:56:35.973056 | orchestrator | Monday 05 January 2026 00:46:01 +0000 (0:00:00.878) 0:01:04.659 ********
2026-01-05 00:56:35.973062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:56:35.973068 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:56:35.973074 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:56:35.973079 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.973085 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 00:56:35.973091 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 00:56:35.973096 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 00:56:35.973102 | orchestrator |
2026-01-05 00:56:35.973108 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:56:35.973113 | orchestrator | Monday 05 January 2026 00:46:03 +0000 (0:00:02.400) 0:01:07.060 ********
2026-01-05 00:56:35.973119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.973126 | orchestrator |
2026-01-05 00:56:35.973132 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:56:35.973138 | orchestrator | Monday 05 January 2026 00:46:04 +0000 (0:00:01.203) 0:01:08.263 ********
2026-01-05 00:56:35.973151 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.973157 | orchestrator |
2026-01-05 00:56:35.973168 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:56:35.973174 | orchestrator | Monday 05 January 2026 00:46:06 +0000 (0:00:01.201) 0:01:09.465 ********
2026-01-05 00:56:35.973179 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973185 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973191 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973196 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.973202 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.973208 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.973214 | orchestrator |
2026-01-05 00:56:35.973219 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:56:35.973225 | orchestrator | Monday 05 January 2026 00:46:07 +0000 (0:00:01.342) 0:01:10.807 ********
2026-01-05 00:56:35.973231 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973236 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973242 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973248 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973253 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973259 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973265 | orchestrator |
2026-01-05 00:56:35.973271 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.973276 | orchestrator | Monday 05 January 2026 00:46:08 +0000 (0:00:00.976) 0:01:11.784 ********
2026-01-05 00:56:35.973282 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973288 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973293 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973299 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973305 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973310 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973316 | orchestrator |
2026-01-05 00:56:35.973322 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.973328 | orchestrator | Monday 05 January 2026 00:46:09 +0000 (0:00:00.854) 0:01:12.638 ********
2026-01-05 00:56:35.973333 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973339 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973345 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973350 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973356 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973361 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973424 | orchestrator |
2026-01-05 00:56:35.973434 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.973444 | orchestrator | Monday 05 January 2026 00:46:10 +0000 (0:00:00.776) 0:01:13.415 ********
2026-01-05 00:56:35.973453 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973467 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973478 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973487 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.973496 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.973536 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.973547 | orchestrator |
2026-01-05 00:56:35.973556 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.973564 | orchestrator | Monday 05 January 2026 00:46:11 +0000 (0:00:01.442) 0:01:14.857 ********
2026-01-05 00:56:35.973574 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973583 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973594 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973604 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973614 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973625 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973632 | orchestrator |
2026-01-05 00:56:35.973645 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.973652 | orchestrator | Monday 05 January 2026 00:46:12 +0000 (0:00:00.685) 0:01:15.543 ********
2026-01-05 00:56:35.973659 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973666 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973673 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973680 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973687 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973693 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973700 | orchestrator |
2026-01-05 00:56:35.973707 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.973714 | orchestrator | Monday 05 January 2026 00:46:12 +0000 (0:00:00.761) 0:01:16.305 ********
2026-01-05 00:56:35.973721 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973728 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973734 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973741 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.973747 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.973754 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.973761 | orchestrator |
2026-01-05 00:56:35.973767 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.973774 | orchestrator | Monday 05 January 2026 00:46:14 +0000 (0:00:01.059) 0:01:17.365 ********
2026-01-05 00:56:35.973782 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973788 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973795 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973802 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.973808 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.973815 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.973822 | orchestrator |
2026-01-05 00:56:35.973829 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:56:35.973836 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:01.308) 0:01:18.674 ********
2026-01-05 00:56:35.973843 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973849 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973855 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973860 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973866 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973872 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973877 | orchestrator |
2026-01-05 00:56:35.973883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:56:35.973889 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:00.754) 0:01:19.428 ********
2026-01-05 00:56:35.973894 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.973900 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.973906 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.973912 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.973918 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.973923 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.973929 | orchestrator |
2026-01-05 00:56:35.973940 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:56:35.973946 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:00.889) 0:01:20.318 ********
2026-01-05 00:56:35.973951 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.973957 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.973964 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.973969 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.973975 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.973981 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.973986 | orchestrator |
2026-01-05 00:56:35.973992 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:56:35.973998 | orchestrator | Monday 05 January 2026 00:46:17 +0000 (0:00:00.617) 0:01:20.935 ********
2026-01-05 00:56:35.974003 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.974043 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.974051 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.974056 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974062 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974068 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974074 | orchestrator |
2026-01-05 00:56:35.974080 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:56:35.974085 | orchestrator | Monday 05 January 2026 00:46:18 +0000 (0:00:01.004) 0:01:21.940 ********
2026-01-05 00:56:35.974091 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.974097 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.974103 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.974108 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974114 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974120 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974125 | orchestrator |
2026-01-05 00:56:35.974131 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:56:35.974137 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:00.770) 0:01:22.710 ********
2026-01-05 00:56:35.974143 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974148 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974154 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974160 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974165 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974171 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974177 | orchestrator |
2026-01-05 00:56:35.974183 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:56:35.974189 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:00.995) 0:01:23.706 ********
2026-01-05 00:56:35.974195 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974200 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974206 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974212 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974241 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974247 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974253 | orchestrator |
2026-01-05 00:56:35.974259 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:56:35.974264 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:01.574) 0:01:25.281 ********
2026-01-05 00:56:35.974271 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974276 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974282 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974288 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.974294 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.974299 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.974305 | orchestrator |
2026-01-05 00:56:35.974311 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:56:35.974316 | orchestrator | Monday 05 January 2026 00:46:22 +0000 (0:00:00.860) 0:01:26.141 ********
2026-01-05 00:56:35.974322 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.974328 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.974334 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.974339 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.974345 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.974350 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.974356 | orchestrator |
2026-01-05 00:56:35.974380 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:56:35.974388 | orchestrator | Monday 05 January 2026 00:46:23 +0000 (0:00:00.690) 0:01:26.832 ********
2026-01-05 00:56:35.974393 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.974399 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.974405 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.974410 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.974419 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.974429 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.974444 | orchestrator |
2026-01-05 00:56:35.974453 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-05 00:56:35.974462 | orchestrator | Monday 05 January 2026 00:46:24 +0000 (0:00:01.198) 0:01:28.030 ********
2026-01-05 00:56:35.974472 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.974481 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.974491 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.974501 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.974511 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.974521 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.974530 | orchestrator |
2026-01-05 00:56:35.974539 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-05 00:56:35.974547 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:01.484) 0:01:29.515 ********
2026-01-05 00:56:35.974557 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.974566 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.974574 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.974580 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.974586 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.974592 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.974598 | orchestrator |
2026-01-05 00:56:35.974604 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-05 00:56:35.974609 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:02.143) 0:01:31.658 ********
2026-01-05 00:56:35.974615 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.974622 | orchestrator |
2026-01-05 00:56:35.974636 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-05 00:56:35.974642 | orchestrator | Monday 05 January 2026 00:46:29 +0000 (0:00:01.124) 0:01:32.783 ********
2026-01-05 00:56:35.974647 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974653 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974659 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974664 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974670 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974676 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974681 | orchestrator |
2026-01-05 00:56:35.974687 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-05 00:56:35.974693 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:00.606) 0:01:33.389 ********
2026-01-05 00:56:35.974698 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974704 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974710 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974715 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974721 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974726 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974732 | orchestrator |
2026-01-05 00:56:35.974738 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-05 00:56:35.974744 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:00.867) 0:01:34.256 ********
2026-01-05 00:56:35.974749 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974755 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974761 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974766 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974772 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974778 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 00:56:35.974784 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974798 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974804 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974810 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974840 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974847 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 00:56:35.974853 | orchestrator |
2026-01-05 00:56:35.974858 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-05 00:56:35.974864 | orchestrator | Monday 05 January 2026 00:46:32 +0000 (0:00:01.395) 0:01:35.651 ********
2026-01-05 00:56:35.974870 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.974876 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.974881 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.974887 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.974893 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.974899 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.974904 | orchestrator |
2026-01-05 00:56:35.974910 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-05 00:56:35.974916 | orchestrator | Monday 05 January 2026 00:46:33 +0000 (0:00:01.136) 0:01:36.788 ********
2026-01-05 00:56:35.974922 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974927 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974933 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974939 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974945 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.974950 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.974956 | orchestrator |
2026-01-05 00:56:35.974962 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-05 00:56:35.974968 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:00.592) 0:01:37.380 ********
2026-01-05 00:56:35.974973 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.974979 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.974985 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.974991 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.974997 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.975002 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.975008 | orchestrator |
2026-01-05 00:56:35.975014 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-05 00:56:35.975020 | orchestrator | Monday 05 January 2026 00:46:34 +0000 (0:00:00.763) 0:01:38.144 ********
2026-01-05 00:56:35.975026 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.975031 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.975037 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.975043 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.975049 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.975054 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.975060 | orchestrator |
2026-01-05 00:56:35.975066 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-05 00:56:35.975072 | orchestrator | Monday 05 January 2026 00:46:35 +0000 (0:00:00.625) 0:01:38.770 ********
2026-01-05 00:56:35.975078 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.975084 | orchestrator |
2026-01-05 00:56:35.975090 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-05 00:56:35.975096 | orchestrator | Monday 05 January 2026 00:46:36 +0000 (0:00:01.233) 0:01:40.003 ********
2026-01-05 00:56:35.975102 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.975108 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.975117 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.975128 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.975135 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.975140 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.975146 | orchestrator |
2026-01-05 00:56:35.975152 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-05 00:56:35.975158 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:01:05.089) 0:02:45.093 ********
2026-01-05 00:56:35.975163 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975169 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975175 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975181 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.975187 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975192 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975198 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975204 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.975209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975215 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975221 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975226 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.975233 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975239 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975244 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975250 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.975255 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975261 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975267 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975273 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.975295 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 00:56:35.975302 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 00:56:35.975308 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 00:56:35.975314 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.975320 | orchestrator |
2026-01-05 00:56:35.975325 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-05 00:56:35.975331 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.710) 0:02:45.803 ********
2026-01-05 00:56:35.975337 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.975342 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.975348 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.975354 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.975360 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.975412 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.975419 | orchestrator |
2026-01-05 00:56:35.975424 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-05 00:56:35.975430 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.742) 0:02:46.546 ********
2026-01-05 00:56:35.975436 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.975442 | orchestrator |
2026-01-05 00:56:35.975448 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-05 00:56:35.975453 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.126) 0:02:46.673 ********
2026-01-05 00:56:35.975464 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.975470 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.975476 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.975482 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.975487 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.975493 | orchestrator | skipping:
[testbed-node-2] 2026-01-05 00:56:35.975498 | orchestrator | 2026-01-05 00:56:35.975504 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-05 00:56:35.975510 | orchestrator | Monday 05 January 2026 00:47:44 +0000 (0:00:00.709) 0:02:47.382 ******** 2026-01-05 00:56:35.975516 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975521 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975527 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975532 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.975538 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975544 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.975549 | orchestrator | 2026-01-05 00:56:35.975555 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-05 00:56:35.975561 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:01.026) 0:02:48.409 ******** 2026-01-05 00:56:35.975567 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975572 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975578 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975584 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.975589 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975595 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.975601 | orchestrator | 2026-01-05 00:56:35.975607 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-05 00:56:35.975612 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:00.694) 0:02:49.104 ******** 2026-01-05 00:56:35.975618 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.975624 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.975629 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.975635 | orchestrator | ok: [testbed-node-0] 2026-01-05 
00:56:35.975641 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.975667 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.975673 | orchestrator | 2026-01-05 00:56:35.975679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-05 00:56:35.975685 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:03.640) 0:02:52.745 ******** 2026-01-05 00:56:35.975691 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.975696 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.975702 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.975708 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.975713 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.975719 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.975725 | orchestrator | 2026-01-05 00:56:35.975730 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-05 00:56:35.975736 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.739) 0:02:53.485 ******** 2026-01-05 00:56:35.975742 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.975750 | orchestrator | 2026-01-05 00:56:35.975755 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-05 00:56:35.975761 | orchestrator | Monday 05 January 2026 00:47:51 +0000 (0:00:01.611) 0:02:55.096 ******** 2026-01-05 00:56:35.975767 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975773 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975778 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975784 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.975790 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975795 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 00:56:35.975801 | orchestrator | 2026-01-05 00:56:35.975812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-05 00:56:35.975817 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:00.912) 0:02:56.009 ******** 2026-01-05 00:56:35.975823 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975829 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975835 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975840 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.975846 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.975857 | orchestrator | 2026-01-05 00:56:35.975863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-05 00:56:35.975869 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:01.102) 0:02:57.111 ******** 2026-01-05 00:56:35.975875 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975880 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975907 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975913 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.975919 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.975925 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975930 | orchestrator | 2026-01-05 00:56:35.975936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-05 00:56:35.975942 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:01.211) 0:02:58.323 ******** 2026-01-05 00:56:35.975947 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.975953 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.975959 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.975964 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 00:56:35.975970 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.975975 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.975981 | orchestrator | 2026-01-05 00:56:35.975987 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-05 00:56:35.975993 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.761) 0:02:59.084 ******** 2026-01-05 00:56:35.975998 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.976004 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.976009 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.976015 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.976021 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.976027 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.976032 | orchestrator | 2026-01-05 00:56:35.976038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-05 00:56:35.976044 | orchestrator | Monday 05 January 2026 00:47:57 +0000 (0:00:01.315) 0:03:00.400 ******** 2026-01-05 00:56:35.976053 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.976062 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.976068 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.976073 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.976079 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.976085 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.976091 | orchestrator | 2026-01-05 00:56:35.976096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-05 00:56:35.976103 | orchestrator | Monday 05 January 2026 00:47:58 +0000 (0:00:01.010) 0:03:01.411 ******** 2026-01-05 00:56:35.976112 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.976122 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 00:56:35.976130 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.976140 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.976149 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.976158 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.976168 | orchestrator | 2026-01-05 00:56:35.976177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-05 00:56:35.976187 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:01.163) 0:03:02.575 ******** 2026-01-05 00:56:35.976198 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.976203 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.976209 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.976215 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.976220 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.976226 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.976232 | orchestrator | 2026-01-05 00:56:35.976237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-05 00:56:35.976243 | orchestrator | Monday 05 January 2026 00:48:00 +0000 (0:00:00.942) 0:03:03.518 ******** 2026-01-05 00:56:35.976249 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.976255 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.976268 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.976277 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.976286 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.976296 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.976306 | orchestrator | 2026-01-05 00:56:35.976315 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-05 00:56:35.976324 | orchestrator | Monday 05 January 2026 00:48:02 +0000 (0:00:01.965) 0:03:05.484 ******** 2026-01-05 
00:56:35.976334 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.976344 | orchestrator | 2026-01-05 00:56:35.976354 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-05 00:56:35.976382 | orchestrator | Monday 05 January 2026 00:48:03 +0000 (0:00:01.569) 0:03:07.054 ******** 2026-01-05 00:56:35.976393 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-05 00:56:35.976403 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-05 00:56:35.976412 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-05 00:56:35.976421 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-05 00:56:35.976431 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-05 00:56:35.976441 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976450 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976461 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976480 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-05 00:56:35.976490 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976498 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-05 00:56:35.976504 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-05 00:56:35.976509 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-05 00:56:35.976515 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-05 00:56:35.976521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-01-05 00:56:35.976527 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-05 00:56:35.976568 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976580 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-05 00:56:35.976602 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976614 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976625 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976636 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976645 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976676 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-05 00:56:35.976682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976688 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976693 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-05 00:56:35.976720 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976742 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976752 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976763 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976770 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-05 00:56:35.976776 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976782 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976789 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976795 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976801 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976807 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-05 00:56:35.976813 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976819 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976825 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976832 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976844 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-05 00:56:35.976850 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976856 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976868 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976884 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 00:56:35.976904 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.976914 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.976923 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976934 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.976943 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976954 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 00:56:35.976965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 00:56:35.976975 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 00:56:35.976985 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.976994 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 00:56:35.977005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.977012 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 00:56:35.977018 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977024 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 00:56:35.977037 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977043 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 
2026-01-05 00:56:35.977049 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 00:56:35.977056 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977091 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977098 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977104 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977110 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977117 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 00:56:35.977123 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-05 00:56:35.977129 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-05 00:56:35.977135 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977141 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-05 00:56:35.977147 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977154 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 00:56:35.977160 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-05 00:56:35.977166 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-05 00:56:35.977172 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-05 00:56:35.977178 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-05 00:56:35.977185 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-05 00:56:35.977191 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-05 00:56:35.977197 | 
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-05 00:56:35.977203 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-05 00:56:35.977209 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-05 00:56:35.977215 | orchestrator | 2026-01-05 00:56:35.977222 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-05 00:56:35.977233 | orchestrator | Monday 05 January 2026 00:48:10 +0000 (0:00:07.120) 0:03:14.174 ******** 2026-01-05 00:56:35.977244 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977254 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.977265 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.977275 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.977287 | orchestrator | 2026-01-05 00:56:35.977297 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-05 00:56:35.977307 | orchestrator | Monday 05 January 2026 00:48:11 +0000 (0:00:01.031) 0:03:15.205 ******** 2026-01-05 00:56:35.977316 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977327 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977345 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977356 | orchestrator | 2026-01-05 00:56:35.977392 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-05 00:56:35.977404 | orchestrator | Monday 05 January 2026 00:48:13 +0000 (0:00:01.144) 
0:03:16.350 ******** 2026-01-05 00:56:35.977413 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977420 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977426 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.977433 | orchestrator | 2026-01-05 00:56:35.977439 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-05 00:56:35.977445 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:01.268) 0:03:17.619 ******** 2026-01-05 00:56:35.977451 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.977457 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.977464 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.977470 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977476 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.977482 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.977488 | orchestrator | 2026-01-05 00:56:35.977495 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-05 00:56:35.977501 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.684) 0:03:18.303 ******** 2026-01-05 00:56:35.977507 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.977513 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.977519 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.977525 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977532 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.977538 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.977544 | orchestrator | 2026-01-05 00:56:35.977550 
| orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-05 00:56:35.977556 | orchestrator | Monday 05 January 2026 00:48:15 +0000 (0:00:00.930) 0:03:19.234 ******** 2026-01-05 00:56:35.977563 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.977569 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.977575 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.977581 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977587 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.977593 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.977599 | orchestrator | 2026-01-05 00:56:35.977634 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-05 00:56:35.977642 | orchestrator | Monday 05 January 2026 00:48:16 +0000 (0:00:00.858) 0:03:20.093 ******** 2026-01-05 00:56:35.977648 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.977654 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.977660 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.977667 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977672 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.977679 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.977685 | orchestrator | 2026-01-05 00:56:35.977691 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-05 00:56:35.977697 | orchestrator | Monday 05 January 2026 00:48:17 +0000 (0:00:00.955) 0:03:21.048 ******** 2026-01-05 00:56:35.977704 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.977710 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.977716 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.977722 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.977729 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 00:56:35.977742 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.977748 | orchestrator |
2026-01-05 00:56:35.977755 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-05 00:56:35.977761 | orchestrator | Monday 05 January 2026 00:48:18 +0000 (0:00:00.780) 0:03:21.829 ********
2026-01-05 00:56:35.977767 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.977773 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.977780 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.977789 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.977799 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.977810 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.977820 | orchestrator |
2026-01-05 00:56:35.977830 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-05 00:56:35.977840 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:00.889) 0:03:22.718 ********
2026-01-05 00:56:35.977848 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.977858 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.977867 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.977878 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.977887 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.977896 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.977906 | orchestrator |
2026-01-05 00:56:35.977917 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-05 00:56:35.977926 | orchestrator | Monday 05 January 2026 00:48:20 +0000 (0:00:00.958) 0:03:23.677 ********
2026-01-05 00:56:35.977935 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.977945 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.977955 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.977965 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.977975 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.977981 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.977987 | orchestrator |
2026-01-05 00:56:35.977993 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-05 00:56:35.977999 | orchestrator | Monday 05 January 2026 00:48:21 +0000 (0:00:01.012) 0:03:24.690 ********
2026-01-05 00:56:35.978006 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978012 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978049 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978055 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.978070 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.978077 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.978083 | orchestrator |
2026-01-05 00:56:35.978089 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-05 00:56:35.978095 | orchestrator | Monday 05 January 2026 00:48:24 +0000 (0:00:03.482) 0:03:28.173 ********
2026-01-05 00:56:35.978102 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.978108 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.978114 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.978120 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978126 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978133 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978139 | orchestrator |
2026-01-05 00:56:35.978145 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-05 00:56:35.978151 | orchestrator | Monday 05 January 2026 00:48:26 +0000 (0:00:01.585) 0:03:29.758 ********
2026-01-05 00:56:35.978157 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.978165 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.978176 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978186 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.978196 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978205 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978215 | orchestrator |
2026-01-05 00:56:35.978226 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-05 00:56:35.978240 | orchestrator | Monday 05 January 2026 00:48:27 +0000 (0:00:01.386) 0:03:31.144 ********
2026-01-05 00:56:35.978246 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978252 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978259 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978265 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978271 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978277 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978283 | orchestrator |
2026-01-05 00:56:35.978289 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-05 00:56:35.978295 | orchestrator | Monday 05 January 2026 00:48:28 +0000 (0:00:01.099) 0:03:32.243 ********
2026-01-05 00:56:35.978302 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 00:56:35.978308 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 00:56:35.978314 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 00:56:35.978321 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978416 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978432 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978442 | orchestrator |
2026-01-05 00:56:35.978453 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-05 00:56:35.978465 | orchestrator | Monday 05 January 2026 00:48:29 +0000 (0:00:01.018) 0:03:33.262 ********
2026-01-05 00:56:35.978477 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-05 00:56:35.978488 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-05 00:56:35.978496 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-05 00:56:35.978502 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-05 00:56:35.978509 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978515 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-05 00:56:35.978522 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978528 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-05 00:56:35.978534 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978540 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978557 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978564 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978572 | orchestrator |
2026-01-05 00:56:35.978583 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-05 00:56:35.978593 | orchestrator | Monday 05 January 2026 00:48:30 +0000 (0:00:00.950) 0:03:34.212 ********
2026-01-05 00:56:35.978603 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978613 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978623 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978633 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978645 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978651 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978657 | orchestrator |
2026-01-05 00:56:35.978663 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-05 00:56:35.978669 | orchestrator | Monday 05 January 2026 00:48:31 +0000 (0:00:00.770) 0:03:34.982 ********
2026-01-05 00:56:35.978676 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978682 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978688 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978694 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978700 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978706 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978712 | orchestrator |
2026-01-05 00:56:35.978721 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 00:56:35.978731 | orchestrator | Monday 05 January 2026 00:48:33 +0000 (0:00:01.393) 0:03:36.376 ********
2026-01-05 00:56:35.978742 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978751 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978762 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978772 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978782 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978792 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978798 | orchestrator |
2026-01-05 00:56:35.978805 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 00:56:35.978811 | orchestrator | Monday 05 January 2026 00:48:33 +0000 (0:00:00.766) 0:03:37.143 ********
2026-01-05 00:56:35.978817 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978823 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978829 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978835 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978841 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978847 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978854 | orchestrator |
2026-01-05 00:56:35.978860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 00:56:35.978896 | orchestrator | Monday 05 January 2026 00:48:34 +0000 (0:00:01.060) 0:03:38.203 ********
2026-01-05 00:56:35.978904 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.978910 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.978916 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.978923 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978929 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978935 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.978941 | orchestrator |
2026-01-05 00:56:35.978947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 00:56:35.978954 | orchestrator | Monday 05 January 2026 00:48:35 +0000 (0:00:01.071) 0:03:39.275 ********
2026-01-05 00:56:35.978960 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.978966 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.978973 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.978979 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.978985 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.978991 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.979004 | orchestrator |
2026-01-05 00:56:35.979010 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 00:56:35.979017 | orchestrator | Monday 05 January 2026 00:48:37 +0000 (0:00:01.667) 0:03:40.942 ********
2026-01-05 00:56:35.979023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.979029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.979035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.979041 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979048 | orchestrator |
2026-01-05 00:56:35.979054 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 00:56:35.979060 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:00.513) 0:03:41.455 ********
2026-01-05 00:56:35.979066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.979072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.979078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.979085 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979091 | orchestrator |
2026-01-05 00:56:35.979098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-05 00:56:35.979109 | orchestrator | Monday 05 January 2026 00:48:38 +0000 (0:00:00.589) 0:03:42.045 ********
2026-01-05 00:56:35.979121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.979132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.979144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.979155 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979167 | orchestrator |
2026-01-05 00:56:35.979179 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-05 00:56:35.979191 | orchestrator | Monday 05 January 2026 00:48:39 +0000 (0:00:00.413) 0:03:42.459 ********
2026-01-05 00:56:35.979202 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.979214 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.979224 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.979236 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.979246 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.979258 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.979268 | orchestrator |
2026-01-05 00:56:35.979280 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-05 00:56:35.979292 | orchestrator | Monday 05 January 2026 00:48:39 +0000 (0:00:00.830) 0:03:43.290 ********
2026-01-05 00:56:35.979299 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-05 00:56:35.979305 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-05 00:56:35.979311 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-05 00:56:35.979318 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-05 00:56:35.979325 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.979335 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-05 00:56:35.979346 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.979356 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-05 00:56:35.979386 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.979397 | orchestrator |
2026-01-05 00:56:35.979408 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-05 00:56:35.979418 | orchestrator | Monday 05 January 2026 00:48:42 +0000 (0:00:02.976) 0:03:46.266 ********
2026-01-05 00:56:35.979428 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.979437 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.979448 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.979455 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.979461 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.979470 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.979480 | orchestrator |
2026-01-05 00:56:35.979490 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 00:56:35.979509 | orchestrator | Monday 05 January 2026 00:48:45 +0000 (0:00:02.688) 0:03:48.955 ********
2026-01-05 00:56:35.979519 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.979530 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.979537 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.979543 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.979552 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.979562 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.979573 | orchestrator |
2026-01-05 00:56:35.979583 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-05 00:56:35.979593 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:01.328) 0:03:50.283 ********
2026-01-05 00:56:35.979604 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979614 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.979624 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.979633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.979639 | orchestrator |
2026-01-05 00:56:35.979646 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-05 00:56:35.979684 | orchestrator | Monday 05 January 2026 00:48:47 +0000 (0:00:01.020) 0:03:51.303 ********
2026-01-05 00:56:35.979691 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.979698 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.979704 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.979710 | orchestrator |
2026-01-05 00:56:35.979716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-05 00:56:35.979722 | orchestrator | Monday 05 January 2026 00:48:48 +0000 (0:00:00.368) 0:03:51.672 ********
2026-01-05 00:56:35.979728 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.979735 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.979741 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.979747 | orchestrator |
2026-01-05 00:56:35.979753 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-05 00:56:35.979759 | orchestrator | Monday 05 January 2026 00:48:49 +0000 (0:00:01.492) 0:03:53.165 ********
2026-01-05 00:56:35.979765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.979772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.979778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.979784 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.979790 | orchestrator |
2026-01-05 00:56:35.979796 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-05 00:56:35.979803 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:01.070) 0:03:54.235 ********
2026-01-05 00:56:35.979809 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.979815 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.979821 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.979827 | orchestrator |
2026-01-05 00:56:35.979833 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-05 00:56:35.979840 | orchestrator | Monday 05 January 2026 00:48:51 +0000 (0:00:00.428) 0:03:54.663 ********
2026-01-05 00:56:35.979846 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.979852 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.979858 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.979864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.979870 | orchestrator |
2026-01-05 00:56:35.979877 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-05 00:56:35.979883 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:01.017) 0:03:55.681 ********
2026-01-05 00:56:35.979889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.979895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.979907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.979913 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979921 | orchestrator |
2026-01-05 00:56:35.979931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-05 00:56:35.979947 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:00.364) 0:03:56.046 ********
2026-01-05 00:56:35.979959 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.979969 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.979979 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.979989 | orchestrator |
2026-01-05 00:56:35.979999 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-05 00:56:35.980010 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.330) 0:03:56.376 ********
2026-01-05 00:56:35.980020 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980030 | orchestrator |
2026-01-05 00:56:35.980044 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-05 00:56:35.980051 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.220) 0:03:56.596 ********
2026-01-05 00:56:35.980057 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980064 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.980070 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.980076 | orchestrator |
2026-01-05 00:56:35.980082 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-05 00:56:35.980088 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.310) 0:03:56.907 ********
2026-01-05 00:56:35.980094 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980100 | orchestrator |
2026-01-05 00:56:35.980106 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-05 00:56:35.980112 | orchestrator | Monday 05 January 2026 00:48:53 +0000 (0:00:00.215) 0:03:57.122 ********
2026-01-05 00:56:35.980119 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980125 | orchestrator |
2026-01-05 00:56:35.980131 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-05 00:56:35.980137 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:00.208) 0:03:57.331 ********
2026-01-05 00:56:35.980144 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980150 | orchestrator |
2026-01-05 00:56:35.980156 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-05 00:56:35.980162 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:00.123) 0:03:57.455 ********
2026-01-05 00:56:35.980169 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980175 | orchestrator |
2026-01-05 00:56:35.980181 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-05 00:56:35.980187 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:00.572) 0:03:58.027 ********
2026-01-05 00:56:35.980193 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980200 | orchestrator |
2026-01-05 00:56:35.980206 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-05 00:56:35.980212 | orchestrator | Monday 05 January 2026 00:48:54 +0000 (0:00:00.189) 0:03:58.217 ********
2026-01-05 00:56:35.980218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.980224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.980230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.980236 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980242 | orchestrator |
2026-01-05 00:56:35.980248 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-05 00:56:35.980284 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.377) 0:03:58.594 ********
2026-01-05 00:56:35.980291 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980297 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.980303 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.980309 | orchestrator |
2026-01-05 00:56:35.980316 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-05 00:56:35.980328 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.301) 0:03:58.896 ********
2026-01-05 00:56:35.980335 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980341 | orchestrator |
2026-01-05 00:56:35.980347 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-05 00:56:35.980353 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.220) 0:03:59.117 ********
2026-01-05 00:56:35.980359 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980386 | orchestrator |
2026-01-05 00:56:35.980392 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-05 00:56:35.980399 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:00.198) 0:03:59.315 ********
2026-01-05 00:56:35.980405 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.980411 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.980417 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.980423 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.980429 | orchestrator |
2026-01-05 00:56:35.980436 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-05 00:56:35.980442 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:00.894) 0:04:00.209 ********
2026-01-05 00:56:35.980448 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.980454 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.980460 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.980466 | orchestrator |
2026-01-05 00:56:35.980472 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-05 00:56:35.980479 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:00.458) 0:04:00.667 ********
2026-01-05 00:56:35.980485 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.980491 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.980497 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.980503 | orchestrator |
2026-01-05 00:56:35.980509 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-05 00:56:35.980515 | orchestrator | Monday 05 January 2026 00:48:58 +0000 (0:00:01.225) 0:04:01.892 ********
2026-01-05 00:56:35.980521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.980527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.980533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.980539 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980545 | orchestrator |
2026-01-05 00:56:35.980552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-05 00:56:35.980558 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.711) 0:04:02.604 ********
2026-01-05 00:56:35.980564 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.980571 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.980577 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.980583 | orchestrator |
2026-01-05 00:56:35.980589 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-05 00:56:35.980595 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.479) 0:04:03.083 ********
2026-01-05 00:56:35.980606 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.980612 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.980618 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.980624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.980631 | orchestrator |
2026-01-05 00:56:35.980637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-05 00:56:35.980643 | orchestrator | Monday 05 January 2026 00:49:00 +0000 (0:00:00.815) 0:04:03.898 ********
2026-01-05 00:56:35.980649 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.980655 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.980661 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.980673 | orchestrator |
2026-01-05 00:56:35.980679 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-05 00:56:35.980685 | orchestrator | Monday 05 January 2026 00:49:01 +0000 (0:00:00.468) 0:04:04.366 ********
2026-01-05 00:56:35.980692 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.980698 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.980704 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.980710 | orchestrator |
2026-01-05 00:56:35.980716 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-05 00:56:35.980722 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:01.212) 0:04:05.579 ********
2026-01-05 00:56:35.980728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:56:35.980734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:56:35.980740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:56:35.980746 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980752 | orchestrator |
2026-01-05 00:56:35.980759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-05 00:56:35.980765 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:00.561) 0:04:06.141 ********
2026-01-05 00:56:35.980771 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.980777 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.980783 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.980789 | orchestrator |
2026-01-05 00:56:35.980795 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-05 00:56:35.980801 | orchestrator | Monday 05 January 2026 00:49:03 +0000 (0:00:00.349) 0:04:06.490 ********
2026-01-05 00:56:35.980808 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980814 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.980820 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.980826 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.980832 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.980877 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.980892 | orchestrator |
2026-01-05 00:56:35.980901 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-05 00:56:35.980911 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:00.889) 0:04:07.379 ********
2026-01-05 00:56:35.980921 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.980931 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.980940 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.980949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.980959 | orchestrator |
2026-01-05 00:56:35.980969 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-05 00:56:35.980979 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:00.829) 0:04:08.209 ********
2026-01-05 00:56:35.980989 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981000 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981010 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981020 | orchestrator |
2026-01-05 00:56:35.981029 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-05 00:56:35.981039 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:00.662) 0:04:08.871 ********
2026-01-05 00:56:35.981049 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.981059 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.981069 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.981080 | orchestrator |
2026-01-05 00:56:35.981091 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-05 00:56:35.981101 | orchestrator | Monday 05 January 2026 00:49:07 +0000 (0:00:01.533) 0:04:10.405 ********
2026-01-05 00:56:35.981112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.981120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.981126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.981139 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981145 | orchestrator |
2026-01-05 00:56:35.981151 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-05 00:56:35.981157 | orchestrator | Monday 05 January 2026 00:49:07 +0000 (0:00:00.648) 0:04:11.053 ********
2026-01-05 00:56:35.981164 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981170 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981176 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981182 | orchestrator |
2026-01-05 00:56:35.981188 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-05 00:56:35.981194 | orchestrator |
2026-01-05 00:56:35.981200 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:56:35.981207 | orchestrator | Monday 05 January 2026 00:49:08 +0000 (0:00:00.636) 0:04:11.690 ********
2026-01-05 00:56:35.981213 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.981220 | orchestrator |
2026-01-05 00:56:35.981227 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:56:35.981233 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:00.837) 0:04:12.527 ********
2026-01-05 00:56:35.981239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.981245 | orchestrator |
2026-01-05 00:56:35.981257 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:56:35.981263 | orchestrator | Monday 05 January 2026 00:49:09 +0000 (0:00:00.533) 0:04:13.061 ********
2026-01-05 00:56:35.981269 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981275 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981281 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981287 | orchestrator |
2026-01-05 00:56:35.981294 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:56:35.981300 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:00.973) 0:04:14.034 ********
2026-01-05 00:56:35.981306 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981312 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.981319 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.981325 | orchestrator |
2026-01-05 00:56:35.981331 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.981337 | orchestrator | Monday 05 January 2026 00:49:10 +0000 (0:00:00.269) 0:04:14.304 ********
2026-01-05 00:56:35.981343 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981350 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.981356 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.981380 | orchestrator |
2026-01-05 00:56:35.981390 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.981397 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:00.297) 0:04:14.602 ********
2026-01-05 00:56:35.981403 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981409 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.981415 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.981421 | orchestrator |
2026-01-05 00:56:35.981427 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.981433 | orchestrator | Monday 05 January 2026 00:49:11 +0000 (0:00:00.290) 0:04:14.893 ********
2026-01-05 00:56:35.981439 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981445 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981452 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981458 | orchestrator |
2026-01-05 00:56:35.981464 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.981470 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:00.900) 0:04:15.793 ********
2026-01-05 00:56:35.981476 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981482 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.981494 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.981500 | orchestrator |
2026-01-05 00:56:35.981506 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.981513 | orchestrator | Monday 05 January 2026 00:49:12 +0000 (0:00:00.301) 0:04:16.095 ********
2026-01-05 00:56:35.981550 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.981557 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.981564 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.981570 | orchestrator |
2026-01-05 00:56:35.981576 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.981582 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.342) 0:04:16.437 ********
2026-01-05 00:56:35.981588 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981595 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981601 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981607 | orchestrator |
2026-01-05 00:56:35.981613 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.981620 | orchestrator | Monday 05 January 2026 00:49:13 +0000 (0:00:00.681) 0:04:17.119 ********
2026-01-05 00:56:35.981626 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.981632 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.981638 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.981644 | orchestrator |
2026-01-05 00:56:35.981651 | orchestrator | TASK
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 00:56:35.981657 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.823) 0:04:17.942 ******** 2026-01-05 00:56:35.981663 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981670 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981676 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981682 | orchestrator | 2026-01-05 00:56:35.981688 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:56:35.981694 | orchestrator | Monday 05 January 2026 00:49:14 +0000 (0:00:00.273) 0:04:18.216 ******** 2026-01-05 00:56:35.981701 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.981707 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.981713 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.981719 | orchestrator | 2026-01-05 00:56:35.981726 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 00:56:35.981732 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:00.316) 0:04:18.532 ******** 2026-01-05 00:56:35.981738 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981744 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981751 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981757 | orchestrator | 2026-01-05 00:56:35.981763 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 00:56:35.981769 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:00.301) 0:04:18.833 ******** 2026-01-05 00:56:35.981775 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981782 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981788 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981794 | orchestrator | 2026-01-05 00:56:35.981800 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-01-05 00:56:35.981806 | orchestrator | Monday 05 January 2026 00:49:15 +0000 (0:00:00.295) 0:04:19.129 ******** 2026-01-05 00:56:35.981812 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981819 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981825 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981831 | orchestrator | 2026-01-05 00:56:35.981837 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 00:56:35.981843 | orchestrator | Monday 05 January 2026 00:49:16 +0000 (0:00:00.502) 0:04:19.631 ******** 2026-01-05 00:56:35.981849 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981855 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981862 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981876 | orchestrator | 2026-01-05 00:56:35.981908 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 00:56:35.981915 | orchestrator | Monday 05 January 2026 00:49:16 +0000 (0:00:00.336) 0:04:19.967 ******** 2026-01-05 00:56:35.981921 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.981928 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.981934 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.981940 | orchestrator | 2026-01-05 00:56:35.981946 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 00:56:35.981952 | orchestrator | Monday 05 January 2026 00:49:16 +0000 (0:00:00.341) 0:04:20.309 ******** 2026-01-05 00:56:35.981958 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.981965 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.981971 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.981977 | orchestrator | 2026-01-05 00:56:35.981983 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-01-05 00:56:35.981989 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.381) 0:04:20.690 ******** 2026-01-05 00:56:35.981995 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982001 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982008 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982045 | orchestrator | 2026-01-05 00:56:35.982054 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 00:56:35.982060 | orchestrator | Monday 05 January 2026 00:49:17 +0000 (0:00:00.619) 0:04:21.310 ******** 2026-01-05 00:56:35.982066 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982072 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982078 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982084 | orchestrator | 2026-01-05 00:56:35.982090 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-05 00:56:35.982096 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:00.501) 0:04:21.812 ******** 2026-01-05 00:56:35.982102 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982109 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982115 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982121 | orchestrator | 2026-01-05 00:56:35.982127 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-05 00:56:35.982133 | orchestrator | Monday 05 January 2026 00:49:18 +0000 (0:00:00.287) 0:04:22.099 ******** 2026-01-05 00:56:35.982139 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.982146 | orchestrator | 2026-01-05 00:56:35.982152 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-05 00:56:35.982158 | orchestrator | Monday 05 January 
2026 00:49:19 +0000 (0:00:00.669) 0:04:22.768 ******** 2026-01-05 00:56:35.982164 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.982170 | orchestrator | 2026-01-05 00:56:35.982200 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-05 00:56:35.982208 | orchestrator | Monday 05 January 2026 00:49:19 +0000 (0:00:00.148) 0:04:22.917 ******** 2026-01-05 00:56:35.982214 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:56:35.982220 | orchestrator | 2026-01-05 00:56:35.982226 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-05 00:56:35.982233 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:00.972) 0:04:23.889 ******** 2026-01-05 00:56:35.982239 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982245 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982251 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982257 | orchestrator | 2026-01-05 00:56:35.982263 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-05 00:56:35.982269 | orchestrator | Monday 05 January 2026 00:49:20 +0000 (0:00:00.305) 0:04:24.195 ******** 2026-01-05 00:56:35.982275 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982282 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982288 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982300 | orchestrator | 2026-01-05 00:56:35.982306 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-05 00:56:35.982312 | orchestrator | Monday 05 January 2026 00:49:21 +0000 (0:00:00.309) 0:04:24.504 ******** 2026-01-05 00:56:35.982318 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982324 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982331 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.982337 | orchestrator | 
2026-01-05 00:56:35.982343 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-05 00:56:35.982349 | orchestrator | Monday 05 January 2026 00:49:22 +0000 (0:00:01.439) 0:04:25.944 ******** 2026-01-05 00:56:35.982355 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982361 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.982412 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982419 | orchestrator | 2026-01-05 00:56:35.982425 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-05 00:56:35.982431 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.778) 0:04:26.722 ******** 2026-01-05 00:56:35.982438 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982444 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.982450 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982456 | orchestrator | 2026-01-05 00:56:35.982462 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-05 00:56:35.982468 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.665) 0:04:27.388 ******** 2026-01-05 00:56:35.982474 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982480 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982487 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982493 | orchestrator | 2026-01-05 00:56:35.982499 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-05 00:56:35.982505 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.697) 0:04:28.086 ******** 2026-01-05 00:56:35.982511 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982517 | orchestrator | 2026-01-05 00:56:35.982523 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-05 00:56:35.982530 | orchestrator | 
Monday 05 January 2026 00:49:26 +0000 (0:00:01.521) 0:04:29.607 ******** 2026-01-05 00:56:35.982536 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982542 | orchestrator | 2026-01-05 00:56:35.982548 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-05 00:56:35.982558 | orchestrator | Monday 05 January 2026 00:49:27 +0000 (0:00:01.265) 0:04:30.873 ******** 2026-01-05 00:56:35.982565 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-05 00:56:35.982571 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.982577 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.982583 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:56:35.982589 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:56:35.982596 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-05 00:56:35.982602 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-05 00:56:35.982608 | orchestrator | changed: [testbed-node-2 -> {{ item }}] 2026-01-05 00:56:35.982614 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:56:35.982620 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-05 00:56:35.982627 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:56:35.982633 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-05 00:56:35.982639 | orchestrator | 2026-01-05 00:56:35.982645 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-05 00:56:35.982651 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:03.246) 0:04:34.119 ******** 2026-01-05 00:56:35.982657 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982683 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.982690 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982696 | orchestrator | 2026-01-05 00:56:35.982703 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-05 00:56:35.982709 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:01.259) 0:04:35.379 ******** 2026-01-05 00:56:35.982715 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982722 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982728 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982734 | orchestrator | 2026-01-05 00:56:35.982740 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-05 00:56:35.982746 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.319) 0:04:35.699 ******** 2026-01-05 00:56:35.982752 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.982759 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.982765 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.982771 | orchestrator | 2026-01-05 00:56:35.982777 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-05 00:56:35.982783 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.463) 0:04:36.163 ******** 2026-01-05 00:56:35.982815 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982822 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982828 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.982835 | orchestrator | 2026-01-05 00:56:35.982841 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-05 00:56:35.982847 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:01.566) 0:04:37.729 ******** 2026-01-05 00:56:35.982853 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.982859 | orchestrator | changed: [testbed-node-1] 
2026-01-05 00:56:35.982865 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.982871 | orchestrator | 2026-01-05 00:56:35.982878 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-05 00:56:35.982884 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:01.401) 0:04:39.131 ******** 2026-01-05 00:56:35.982890 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.982896 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.982902 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.982908 | orchestrator | 2026-01-05 00:56:35.982915 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-05 00:56:35.982921 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:00.662) 0:04:39.794 ******** 2026-01-05 00:56:35.982927 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.982933 | orchestrator | 2026-01-05 00:56:35.982939 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-05 00:56:35.982945 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.687) 0:04:40.482 ******** 2026-01-05 00:56:35.982952 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.982958 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.982964 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.982969 | orchestrator | 2026-01-05 00:56:35.982975 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-05 00:56:35.982980 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.406) 0:04:40.888 ******** 2026-01-05 00:56:35.982985 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.982991 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.982996 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 00:56:35.983002 | orchestrator | 2026-01-05 00:56:35.983007 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-05 00:56:35.983012 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:00.319) 0:04:41.208 ******** 2026-01-05 00:56:35.983018 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.983023 | orchestrator | 2026-01-05 00:56:35.983029 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-05 00:56:35.983038 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:00.665) 0:04:41.874 ******** 2026-01-05 00:56:35.983043 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.983049 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.983054 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.983060 | orchestrator | 2026-01-05 00:56:35.983065 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-05 00:56:35.983070 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:01.889) 0:04:43.764 ******** 2026-01-05 00:56:35.983076 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.983081 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.983086 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.983092 | orchestrator | 2026-01-05 00:56:35.983101 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-05 00:56:35.983107 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:01.515) 0:04:45.279 ******** 2026-01-05 00:56:35.983112 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.983117 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.983123 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.983128 | orchestrator | 2026-01-05 
00:56:35.983134 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-05 00:56:35.983139 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:02.061) 0:04:47.341 ******** 2026-01-05 00:56:35.983144 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:35.983150 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:35.983155 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:35.983160 | orchestrator | 2026-01-05 00:56:35.983166 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-05 00:56:35.983171 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:02.010) 0:04:49.351 ******** 2026-01-05 00:56:35.983177 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.983182 | orchestrator | 2026-01-05 00:56:35.983187 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-05 00:56:35.983193 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:00.574) 0:04:49.925 ******** 2026-01-05 00:56:35.983198 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-05 00:56:35.983204 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.983209 | orchestrator | 2026-01-05 00:56:35.983214 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-05 00:56:35.983220 | orchestrator | Monday 05 January 2026 00:50:08 +0000 (0:00:21.904) 0:05:11.830 ******** 2026-01-05 00:56:35.983225 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.983231 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.983236 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.983241 | orchestrator | 2026-01-05 00:56:35.983247 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-05 00:56:35.983252 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:10.290) 0:05:22.120 ******** 2026-01-05 00:56:35.983258 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.983263 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.983268 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.983274 | orchestrator | 2026-01-05 00:56:35.983279 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-05 00:56:35.983303 | orchestrator | Monday 05 January 2026 00:50:19 +0000 (0:00:00.473) 0:05:22.594 ******** 2026-01-05 00:56:35.983311 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-05 00:56:35.983327 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-05 00:56:35.983334 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-05 00:56:35.983342 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-05 00:56:35.983347 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-05 00:56:35.983354 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__189a2aac218eda2ae0793bbb89159aeccf02b13e'}])  2026-01-05 00:56:35.983361 | orchestrator | 2026-01-05 00:56:35.983383 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:56:35.983389 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:14.358) 0:05:36.953 ******** 2026-01-05 00:56:35.983394 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.983400 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.983405 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.983411 | orchestrator | 2026-01-05 00:56:35.983416 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-05 00:56:35.983421 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:00.289) 0:05:37.242 ******** 2026-01-05 00:56:35.983427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.983432 | orchestrator | 2026-01-05 00:56:35.983438 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-05 00:56:35.983443 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.831) 0:05:38.074 ******** 2026-01-05 00:56:35.983449 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.983454 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.983460 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.983468 | orchestrator | 2026-01-05 00:56:35.983477 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-05 00:56:35.983483 | orchestrator | Monday 05 January 2026 00:50:35 +0000 (0:00:00.370) 0:05:38.444 ******** 2026-01-05 00:56:35.983488 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:35.983494 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:35.983499 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:35.983504 | orchestrator | 2026-01-05 00:56:35.983510 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-05 
00:56:35.983515 | orchestrator | Monday 05 January 2026 00:50:35 +0000 (0:00:00.570) 0:05:39.015 ********
2026-01-05 00:56:35.983525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.983531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.983536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.983541 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.983547 | orchestrator |
2026-01-05 00:56:35.983552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-05 00:56:35.983558 | orchestrator | Monday 05 January 2026 00:50:36 +0000 (0:00:00.953) 0:05:39.968 ********
2026-01-05 00:56:35.983563 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.983587 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.983593 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.983599 | orchestrator |
2026-01-05 00:56:35.983604 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-05 00:56:35.983613 | orchestrator |
2026-01-05 00:56:35.983623 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:56:35.983633 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:00.978) 0:05:40.946 ********
2026-01-05 00:56:35.983642 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.983652 | orchestrator |
2026-01-05 00:56:35.983661 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:56:35.983669 | orchestrator | Monday 05 January 2026 00:50:38 +0000 (0:00:00.651) 0:05:41.597 ********
2026-01-05 00:56:35.983679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.983688 | orchestrator |
2026-01-05 00:56:35.983697 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:56:35.983706 | orchestrator | Monday 05 January 2026 00:50:39 +0000 (0:00:01.026) 0:05:42.624 ********
2026-01-05 00:56:35.983715 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.983723 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.983733 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.983742 | orchestrator |
2026-01-05 00:56:35.983751 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:56:35.983760 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:00.797) 0:05:43.421 ********
2026-01-05 00:56:35.983861 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.983883 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.983888 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.983894 | orchestrator |
2026-01-05 00:56:35.983900 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.983906 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:00.348) 0:05:43.770 ********
2026-01-05 00:56:35.983911 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.983920 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.983928 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.983943 | orchestrator |
2026-01-05 00:56:35.983953 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.983962 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.682) 0:05:44.453 ********
2026-01-05 00:56:35.983970 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.983978 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.983986 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.983995 | orchestrator |
2026-01-05 00:56:35.984003 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.984012 | orchestrator | Monday 05 January 2026 00:50:41 +0000 (0:00:00.383) 0:05:44.836 ********
2026-01-05 00:56:35.984021 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984030 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984038 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984047 | orchestrator |
2026-01-05 00:56:35.984056 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.984074 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:00.739) 0:05:45.575 ********
2026-01-05 00:56:35.984083 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984089 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984098 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984104 | orchestrator |
2026-01-05 00:56:35.984109 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.984115 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:00.371) 0:05:45.947 ********
2026-01-05 00:56:35.984120 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984125 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984131 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984136 | orchestrator |
2026-01-05 00:56:35.984141 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.984147 | orchestrator | Monday 05 January 2026 00:50:43 +0000 (0:00:00.686) 0:05:46.634 ********
2026-01-05 00:56:35.984152 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984158 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984163 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984168 | orchestrator |
2026-01-05 00:56:35.984174 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.984179 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:00.748) 0:05:47.383 ********
2026-01-05 00:56:35.984185 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984190 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984196 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984201 | orchestrator |
2026-01-05 00:56:35.984206 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:56:35.984212 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:00.799) 0:05:48.183 ********
2026-01-05 00:56:35.984217 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984222 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984228 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984233 | orchestrator |
2026-01-05 00:56:35.984239 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:56:35.984244 | orchestrator | Monday 05 January 2026 00:50:45 +0000 (0:00:00.313) 0:05:48.496 ********
2026-01-05 00:56:35.984249 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984255 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984260 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984265 | orchestrator |
2026-01-05 00:56:35.984271 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:56:35.984276 | orchestrator | Monday 05 January 2026 00:50:45 +0000 (0:00:00.412) 0:05:48.909 ********
2026-01-05 00:56:35.984282 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984287 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984292 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984298 | orchestrator |
2026-01-05 00:56:35.984303 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:56:35.984343 | orchestrator | Monday 05 January 2026 00:50:46 +0000 (0:00:00.708) 0:05:49.618 ********
2026-01-05 00:56:35.984350 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984355 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984361 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984410 | orchestrator |
2026-01-05 00:56:35.984416 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:56:35.984422 | orchestrator | Monday 05 January 2026 00:50:46 +0000 (0:00:00.322) 0:05:49.941 ********
2026-01-05 00:56:35.984427 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984432 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984438 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984443 | orchestrator |
2026-01-05 00:56:35.984448 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:56:35.984454 | orchestrator | Monday 05 January 2026 00:50:46 +0000 (0:00:00.309) 0:05:50.250 ********
2026-01-05 00:56:35.984464 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984470 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984475 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984480 | orchestrator |
2026-01-05 00:56:35.984486 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:56:35.984491 | orchestrator | Monday 05 January 2026 00:50:47 +0000 (0:00:00.381) 0:05:50.632 ********
2026-01-05 00:56:35.984497 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984502 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984508 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984517 | orchestrator |
2026-01-05 00:56:35.984529 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:56:35.984541 | orchestrator | Monday 05 January 2026 00:50:47 +0000 (0:00:00.569) 0:05:51.202 ********
2026-01-05 00:56:35.984549 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984558 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984566 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984575 | orchestrator |
2026-01-05 00:56:35.984583 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:56:35.984591 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:00.329) 0:05:51.531 ********
2026-01-05 00:56:35.984600 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984608 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984617 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984625 | orchestrator |
2026-01-05 00:56:35.984635 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:56:35.984644 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:00.332) 0:05:51.863 ********
2026-01-05 00:56:35.984653 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984662 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984670 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984679 | orchestrator |
2026-01-05 00:56:35.984687 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-05 00:56:35.984695 | orchestrator | Monday 05 January 2026 00:50:49 +0000 (0:00:00.881) 0:05:52.745 ********
2026-01-05 00:56:35.984701 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.984706 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:56:35.984712 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:56:35.984716 | orchestrator |
2026-01-05 00:56:35.984721 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-05 00:56:35.984726 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:00.649) 0:05:53.394 ********
2026-01-05 00:56:35.984735 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.984741 | orchestrator |
2026-01-05 00:56:35.984745 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-05 00:56:35.984750 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:00.619) 0:05:54.014 ********
2026-01-05 00:56:35.984755 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.984760 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.984765 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.984769 | orchestrator |
2026-01-05 00:56:35.984774 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-05 00:56:35.984779 | orchestrator | Monday 05 January 2026 00:50:51 +0000 (0:00:00.703) 0:05:54.718 ********
2026-01-05 00:56:35.984784 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.984789 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.984793 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.984798 | orchestrator |
2026-01-05 00:56:35.984803 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-05 00:56:35.984808 | orchestrator | Monday 05 January 2026 00:50:52 +0000 (0:00:00.680) 0:05:55.398 ********
2026-01-05 00:56:35.984819 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984824 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984829 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984833 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-05 00:56:35.984838 | orchestrator |
2026-01-05 00:56:35.984843 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-05 00:56:35.984847 | orchestrator | Monday 05 January 2026 00:51:02 +0000 (0:00:10.695) 0:06:06.094 ********
2026-01-05 00:56:35.984852 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.984857 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.984862 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.984866 | orchestrator |
2026-01-05 00:56:35.984871 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-05 00:56:35.984876 | orchestrator | Monday 05 January 2026 00:51:03 +0000 (0:00:00.391) 0:06:06.486 ********
2026-01-05 00:56:35.984881 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984885 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 00:56:35.984890 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 00:56:35.984896 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984905 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:56:35.984937 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:56:35.984943 | orchestrator |
2026-01-05 00:56:35.984948 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-05 00:56:35.984952 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:02.187) 0:06:08.673 ********
2026-01-05 00:56:35.984957 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984962 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 00:56:35.984967 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 00:56:35.984972 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:56:35.984976 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 00:56:35.984981 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 00:56:35.984986 | orchestrator |
2026-01-05 00:56:35.984990 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-05 00:56:35.984995 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:01.305) 0:06:09.979 ********
2026-01-05 00:56:35.985000 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.985005 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.985010 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.985014 | orchestrator |
2026-01-05 00:56:35.985019 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-05 00:56:35.985024 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:01.025) 0:06:11.005 ********
2026-01-05 00:56:35.985029 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985034 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.985038 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.985043 | orchestrator |
2026-01-05 00:56:35.985048 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-05 00:56:35.985053 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:00.312) 0:06:11.317 ********
2026-01-05 00:56:35.985057 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985062 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.985067 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.985072 | orchestrator |
2026-01-05 00:56:35.985076 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-05 00:56:35.985081 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:00.321) 0:06:11.639 ********
2026-01-05 00:56:35.985086 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.985091 | orchestrator |
2026-01-05 00:56:35.985100 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-05 00:56:35.985105 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.797) 0:06:12.436 ********
2026-01-05 00:56:35.985110 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985114 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.985119 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.985124 | orchestrator |
2026-01-05 00:56:35.985129 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-05 00:56:35.985133 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.447) 0:06:12.883 ********
2026-01-05 00:56:35.985138 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985143 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.985147 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.985152 | orchestrator |
2026-01-05 00:56:35.985157 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-05 00:56:35.985165 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:00.377) 0:06:13.261 ********
2026-01-05 00:56:35.985170 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.985175 | orchestrator |
2026-01-05 00:56:35.985180 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-05 00:56:35.985184 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:00.858) 0:06:14.120 ********
2026-01-05 00:56:35.985189 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985194 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985199 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985203 | orchestrator |
2026-01-05 00:56:35.985208 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-05 00:56:35.985213 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:01.285) 0:06:15.405 ********
2026-01-05 00:56:35.985218 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985222 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985227 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985232 | orchestrator |
2026-01-05 00:56:35.985237 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-05 00:56:35.985244 | orchestrator | Monday 05 January 2026 00:51:13 +0000 (0:00:01.178) 0:06:16.584 ********
2026-01-05 00:56:35.985254 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985266 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985274 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985282 | orchestrator |
2026-01-05 00:56:35.985290 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-05 00:56:35.985298 | orchestrator | Monday 05 January 2026 00:51:15 +0000 (0:00:01.746) 0:06:18.331 ********
2026-01-05 00:56:35.985306 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985314 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985322 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985330 | orchestrator |
2026-01-05 00:56:35.985338 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-05 00:56:35.985347 | orchestrator | Monday 05 January 2026 00:51:17 +0000 (0:00:02.043) 0:06:20.375 ********
2026-01-05 00:56:35.985355 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985380 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.985388 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-05 00:56:35.985396 | orchestrator |
2026-01-05 00:56:35.985404 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-05 00:56:35.985415 | orchestrator | Monday 05 January 2026 00:51:17 +0000 (0:00:00.760) 0:06:21.135 ********
2026-01-05 00:56:35.985454 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-05 00:56:35.985463 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-05 00:56:35.985477 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-05 00:56:35.985486 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-05 00:56:35.985494 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.985501 | orchestrator |
2026-01-05 00:56:35.985510 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-05 00:56:35.985518 | orchestrator | Monday 05 January 2026 00:51:41 +0000 (0:00:24.133) 0:06:45.269 ********
2026-01-05 00:56:35.985525 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.985534 | orchestrator |
2026-01-05 00:56:35.985539 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-05 00:56:35.985544 | orchestrator | Monday 05 January 2026 00:51:43 +0000 (0:00:01.251) 0:06:46.521 ********
2026-01-05 00:56:35.985549 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.985554 | orchestrator |
2026-01-05 00:56:35.985559 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-05 00:56:35.985564 | orchestrator | Monday 05 January 2026 00:51:43 +0000 (0:00:00.327) 0:06:46.849 ********
2026-01-05 00:56:35.985568 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.985573 | orchestrator |
2026-01-05 00:56:35.985578 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-05 00:56:35.985583 | orchestrator | Monday 05 January 2026 00:51:43 +0000 (0:00:00.166) 0:06:47.015 ********
2026-01-05 00:56:35.985588 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-05 00:56:35.985593 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-05 00:56:35.985597 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-05 00:56:35.985602 | orchestrator |
2026-01-05 00:56:35.985607 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-05 00:56:35.985611 | orchestrator | Monday 05 January 2026 00:51:50 +0000 (0:00:07.162) 0:06:54.178 ********
2026-01-05 00:56:35.985616 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-05 00:56:35.985621 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-05 00:56:35.985626 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-05 00:56:35.985631 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-05 00:56:35.985635 | orchestrator |
2026-01-05 00:56:35.985640 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 00:56:35.985645 | orchestrator | Monday 05 January 2026 00:51:56 +0000 (0:00:05.168) 0:06:59.346 ********
2026-01-05 00:56:35.985650 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985654 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985659 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985664 | orchestrator |
2026-01-05 00:56:35.985669 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-05 00:56:35.985677 | orchestrator | Monday 05 January 2026 00:51:56 +0000 (0:00:00.639) 0:06:59.985 ********
2026-01-05 00:56:35.985683 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.985687 | orchestrator |
2026-01-05 00:56:35.985692 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-05 00:56:35.985697 | orchestrator | Monday 05 January 2026 00:51:57 +0000 (0:00:00.511) 0:07:00.497 ********
2026-01-05 00:56:35.985702 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.985706 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.985711 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.985716 | orchestrator |
2026-01-05 00:56:35.985721 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-05 00:56:35.985725 | orchestrator | Monday 05 January 2026 00:51:57 +0000 (0:00:00.501) 0:07:00.999 ********
2026-01-05 00:56:35.985737 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.985741 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.985746 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.985751 | orchestrator |
2026-01-05 00:56:35.985756 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-05 00:56:35.985760 | orchestrator | Monday 05 January 2026 00:51:58 +0000 (0:00:01.191) 0:07:02.190 ********
2026-01-05 00:56:35.985765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:56:35.985770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:56:35.985775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:56:35.985779 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.985784 | orchestrator |
2026-01-05 00:56:35.985789 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-05 00:56:35.985794 | orchestrator | Monday 05 January 2026 00:51:59 +0000 (0:00:00.606) 0:07:02.796 ********
2026-01-05 00:56:35.985798 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.985803 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.985808 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.985813 | orchestrator |
2026-01-05 00:56:35.985817 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-05 00:56:35.985822 | orchestrator |
2026-01-05 00:56:35.985827 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:56:35.985832 | orchestrator | Monday 05 January 2026 00:52:00 +0000 (0:00:00.935) 0:07:03.731 ********
2026-01-05 00:56:35.985837 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.985842 | orchestrator |
2026-01-05 00:56:35.985870 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:56:35.985876 | orchestrator | Monday 05 January 2026 00:52:01 +0000 (0:00:00.631) 0:07:04.363 ********
2026-01-05 00:56:35.985881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-01-05 00:56:35.985885 | orchestrator |
2026-01-05 00:56:35.985890 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:56:35.985895 | orchestrator | Monday 05 January 2026 00:52:01 +0000 (0:00:00.875) 0:07:05.239 ********
2026-01-05 00:56:35.985900 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.985904 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.985909 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.985914 | orchestrator |
2026-01-05 00:56:35.985919 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:56:35.985924 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.407) 0:07:05.647 ********
2026-01-05 00:56:35.985928 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.985933 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.985938 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.985942 | orchestrator |
2026-01-05 00:56:35.985947 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.985952 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.693) 0:07:06.340 ********
2026-01-05 00:56:35.985957 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.985962 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.985966 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.985971 | orchestrator |
2026-01-05 00:56:35.985976 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.985981 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.701) 0:07:07.042 ********
2026-01-05 00:56:35.985985 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.985990 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.985995 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.985999 | orchestrator |
2026-01-05 00:56:35.986004 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.986076 | orchestrator | Monday 05 January 2026 00:52:04 +0000 (0:00:00.982) 0:07:08.024 ********
2026-01-05 00:56:35.986083 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986088 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986093 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986098 | orchestrator |
2026-01-05 00:56:35.986102 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.986107 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:00.319) 0:07:08.343 ********
2026-01-05 00:56:35.986112 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986117 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986122 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986126 | orchestrator |
2026-01-05 00:56:35.986131 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.986136 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:00.358) 0:07:08.702 ********
2026-01-05 00:56:35.986141 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986146 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986150 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986155 | orchestrator |
2026-01-05 00:56:35.986160 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.986168 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:00.330) 0:07:09.033 ********
2026-01-05 00:56:35.986173 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986178 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986183 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986188 | orchestrator |
2026-01-05 00:56:35.986193 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.986201 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:00.671) 0:07:09.705 ********
2026-01-05 00:56:35.986209 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986216 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986224 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986231 | orchestrator |
2026-01-05 00:56:35.986240 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:56:35.986248 | orchestrator | Monday 05 January 2026 00:52:07 +0000 (0:00:00.982) 0:07:10.687 ********
2026-01-05 00:56:35.986255 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986264 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986269 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986274 | orchestrator |
2026-01-05 00:56:35.986279 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:56:35.986284 | orchestrator | Monday 05 January 2026 00:52:07 +0000 (0:00:00.367) 0:07:11.054 ********
2026-01-05 00:56:35.986288 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986293 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986298 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986303 | orchestrator |
2026-01-05 00:56:35.986307 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:56:35.986312 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:00.316) 0:07:11.371 ********
2026-01-05 00:56:35.986317 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986321 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986326 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986331 | orchestrator |
2026-01-05 00:56:35.986336 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:56:35.986342 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:00.336) 0:07:11.707 ********
2026-01-05 00:56:35.986350 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986357 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986384 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986391 | orchestrator |
2026-01-05 00:56:35.986398 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:56:35.986406 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:00.691) 0:07:12.399 ********
2026-01-05 00:56:35.986419 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986427 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986434 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986441 | orchestrator |
2026-01-05 00:56:35.986450 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:56:35.986463 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:00.351) 0:07:12.750 ********
2026-01-05 00:56:35.986471 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986479 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986487 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986495 | orchestrator |
2026-01-05 00:56:35.986502 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:56:35.986506 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:00.347) 0:07:13.098 ********
2026-01-05 00:56:35.986511 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986516 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986521 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986525 | orchestrator |
2026-01-05 00:56:35.986530 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:56:35.986535 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:00.362) 0:07:13.460 ********
2026-01-05 00:56:35.986539 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986544 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986549 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986553 | orchestrator |
2026-01-05 00:56:35.986558 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:56:35.986563 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:00.617) 0:07:14.077 ********
2026-01-05 00:56:35.986568 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986574 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986582 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986590 | orchestrator |
2026-01-05 00:56:35.986598 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:56:35.986606 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:00.344) 0:07:14.422 ********
2026-01-05 00:56:35.986614 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986619 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986623 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986628 | orchestrator |
2026-01-05 00:56:35.986633 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-05 00:56:35.986638 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:00.579) 0:07:15.001 ********
2026-01-05 00:56:35.986642 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986647 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.986652 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.986656 | orchestrator |
2026-01-05 00:56:35.986662 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-05 00:56:35.986669 | orchestrator | Monday 05 January 2026 00:52:12 +0000 (0:00:00.651) 0:07:15.652 ********
2026-01-05 00:56:35.986677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 00:56:35.986685 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 00:56:35.986693 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 00:56:35.986702 | orchestrator |
2026-01-05 00:56:35.986709 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-05 00:56:35.986721 | orchestrator | Monday 05 January 2026 00:52:12 +0000 (0:00:00.652) 0:07:16.304 ********
2026-01-05 00:56:35.986728 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.986733 | orchestrator |
2026-01-05 00:56:35.986746 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-05 00:56:35.986751 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:00.543) 0:07:16.848 ********
2026-01-05 00:56:35.986760 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986765 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986770 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986774 | orchestrator |
2026-01-05 00:56:35.986779 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-05 00:56:35.986784 | orchestrator | Monday 05 January 2026 00:52:14 +0000 (0:00:00.567) 0:07:17.416 ********
2026-01-05 00:56:35.986789 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.986793 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.986798 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.986803 | orchestrator |
2026-01-05 00:56:35.986808 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-05 00:56:35.986812 | orchestrator | Monday 05 January 2026 00:52:14 +0000 (0:00:00.329) 0:07:17.746 ********
2026-01-05 00:56:35.986817 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.986822 |
orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.986826 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.986831 | orchestrator | 2026-01-05 00:56:35.986836 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-05 00:56:35.986841 | orchestrator | Monday 05 January 2026 00:52:15 +0000 (0:00:00.682) 0:07:18.429 ******** 2026-01-05 00:56:35.986845 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.986850 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.986855 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.986860 | orchestrator | 2026-01-05 00:56:35.986864 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-05 00:56:35.986869 | orchestrator | Monday 05 January 2026 00:52:15 +0000 (0:00:00.387) 0:07:18.816 ******** 2026-01-05 00:56:35.986874 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 00:56:35.986879 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 00:56:35.986883 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 00:56:35.986888 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 00:56:35.986893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 00:56:35.986899 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 00:56:35.986914 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 00:56:35.986921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 00:56:35.986927 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 00:56:35.986934 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 00:56:35.986940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 00:56:35.986947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 00:56:35.986954 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 00:56:35.986964 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 00:56:35.986974 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 00:56:35.986983 | orchestrator | 2026-01-05 00:56:35.986989 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-01-05 00:56:35.986996 | orchestrator | Monday 05 January 2026 00:52:17 +0000 (0:00:02.279) 0:07:21.095 ******** 2026-01-05 00:56:35.987004 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987011 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987018 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987032 | orchestrator | 2026-01-05 00:56:35.987039 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-05 00:56:35.987046 | orchestrator | Monday 05 January 2026 00:52:18 +0000 (0:00:00.324) 0:07:21.420 ******** 2026-01-05 00:56:35.987053 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.987061 | orchestrator | 2026-01-05 00:56:35.987069 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-05 00:56:35.987076 | orchestrator | Monday 05 January 2026 00:52:18 +0000 (0:00:00.529) 
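The "Apply operating system tuning" task above writes a fixed set of kernel parameters on every OSD node. As a minimal sketch, the list below is rebuilt from the exact values visible in the log items; the `render_sysctl_conf` helper is illustrative only (it is not part of ceph-ansible) and simply shows what the resulting `sysctl.conf`-style lines look like:

```python
# Kernel tuning parameters as reported by the log's loop items.
# render_sysctl_conf is a hypothetical helper for illustration.

OS_TUNING_PARAMS = [
    {"name": "fs.aio-max-nr", "value": "1048576"},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},  # derived per-host in the play
]

def render_sysctl_conf(params):
    """Render the settings in sysctl.conf syntax, one 'key = value' per line."""
    return "\n".join(f"{p['name']} = {p['value']}" for p in params)

print(render_sysctl_conf(OS_TUNING_PARAMS))
```

Note that `vm.min_free_kbytes` is computed earlier in the play (the "Get default vm.min_free_kbytes" / "Set_fact vm_min_free_kbytes" tasks), so 67584 is specific to these testbed nodes.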
0:07:21.949 ******** 2026-01-05 00:56:35.987085 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 00:56:35.987093 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 00:56:35.987102 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 00:56:35.987107 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-05 00:56:35.987112 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-05 00:56:35.987117 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-05 00:56:35.987122 | orchestrator | 2026-01-05 00:56:35.987126 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-05 00:56:35.987131 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:01.183) 0:07:23.133 ******** 2026-01-05 00:56:35.987136 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.987141 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:56:35.987145 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:56:35.987150 | orchestrator | 2026-01-05 00:56:35.987160 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:56:35.987165 | orchestrator | Monday 05 January 2026 00:52:21 +0000 (0:00:02.052) 0:07:25.185 ******** 2026-01-05 00:56:35.987170 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:56:35.987175 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:56:35.987179 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.987184 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:56:35.987189 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 00:56:35.987193 | orchestrator | changed: [testbed-node-4] 2026-01-05 
00:56:35.987198 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:56:35.987203 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 00:56:35.987208 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.987212 | orchestrator | 2026-01-05 00:56:35.987217 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-05 00:56:35.987222 | orchestrator | Monday 05 January 2026 00:52:23 +0000 (0:00:01.172) 0:07:26.358 ******** 2026-01-05 00:56:35.987227 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:56:35.987231 | orchestrator | 2026-01-05 00:56:35.987236 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-05 00:56:35.987241 | orchestrator | Monday 05 January 2026 00:52:25 +0000 (0:00:02.354) 0:07:28.713 ******** 2026-01-05 00:56:35.987246 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.987251 | orchestrator | 2026-01-05 00:56:35.987255 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-05 00:56:35.987260 | orchestrator | Monday 05 January 2026 00:52:25 +0000 (0:00:00.600) 0:07:29.313 ******** 2026-01-05 00:56:35.987265 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-846bb30c-958c-57a2-8682-0625433ec757', 'data_vg': 'ceph-846bb30c-958c-57a2-8682-0625433ec757'}) 2026-01-05 00:56:35.987273 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6123202-7d2d-5b15-b15a-b013203adbfc', 'data_vg': 'ceph-f6123202-7d2d-5b15-b15a-b013203adbfc'}) 2026-01-05 00:56:35.987278 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c427200-cd92-5345-a12e-93ab1a68a0a9', 'data_vg': 'ceph-8c427200-cd92-5345-a12e-93ab1a68a0a9'}) 2026-01-05 00:56:35.987291 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-be99b097-8f9c-5b18-b9e6-1dc57f49383d', 'data_vg': 'ceph-be99b097-8f9c-5b18-b9e6-1dc57f49383d'}) 2026-01-05 00:56:35.987296 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21', 'data_vg': 'ceph-6549b2e5-b8c2-5b01-a1b7-5ee8ee491b21'}) 2026-01-05 00:56:35.987301 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0a3b48c-8251-5295-95c4-04cb80bcb769', 'data_vg': 'ceph-f0a3b48c-8251-5295-95c4-04cb80bcb769'}) 2026-01-05 00:56:35.987306 | orchestrator | 2026-01-05 00:56:35.987311 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-05 00:56:35.987315 | orchestrator | Monday 05 January 2026 00:53:07 +0000 (0:00:41.704) 0:08:11.018 ******** 2026-01-05 00:56:35.987320 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987325 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987330 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987334 | orchestrator | 2026-01-05 00:56:35.987339 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-05 00:56:35.987344 | orchestrator | Monday 05 January 2026 00:53:08 +0000 (0:00:00.381) 0:08:11.399 ******** 2026-01-05 00:56:35.987349 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.987354 | orchestrator | 2026-01-05 00:56:35.987358 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-05 00:56:35.987381 | orchestrator | Monday 05 January 2026 00:53:08 +0000 (0:00:00.632) 0:08:12.031 ******** 2026-01-05 00:56:35.987389 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.987396 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.987403 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.987411 | orchestrator | 2026-01-05 
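Each item of the "Use ceph-volume to create osds" task names a pre-created logical volume (`data`) and its volume group (`data_vg`). A hedged sketch of the command shape this drives, assuming the common `ceph-volume lvm create --data <vg>/<lv>` invocation for an existing LV (`build_ceph_volume_cmd` is an illustrative helper, not the actual ceph-ansible module code, and the `--bluestore` flag is an assumption about the objectstore in use):

```python
# Illustrative only: reconstructs the ceph-volume call per lvm_volumes item.

def build_ceph_volume_cmd(item):
    # ceph-volume accepts an existing LV as "<vg>/<lv>" for --data.
    return [
        "ceph-volume", "lvm", "create",
        "--bluestore",  # assumption: bluestore objectstore
        "--data", f"{item['data_vg']}/{item['data']}",
    ]

item = {  # first item reported for testbed-node-4 in the log
    "data": "osd-block-846bb30c-958c-57a2-8682-0625433ec757",
    "data_vg": "ceph-846bb30c-958c-57a2-8682-0625433ec757",
}
print(" ".join(build_ceph_volume_cmd(item)))
```

With two such items per node, each of the three OSD nodes ends up with two OSDs, which matches the six ids (0-5) collected later in the play.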
00:56:35.987416 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-05 00:56:35.987420 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:00.999) 0:08:13.031 ******** 2026-01-05 00:56:35.987425 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.987430 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.987434 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.987439 | orchestrator | 2026-01-05 00:56:35.987445 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-05 00:56:35.987453 | orchestrator | Monday 05 January 2026 00:53:12 +0000 (0:00:02.821) 0:08:15.852 ******** 2026-01-05 00:56:35.987461 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.987469 | orchestrator | 2026-01-05 00:56:35.987477 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-01-05 00:56:35.987485 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:00.638) 0:08:16.491 ******** 2026-01-05 00:56:35.987493 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.987498 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.987503 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.987508 | orchestrator | 2026-01-05 00:56:35.987513 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-05 00:56:35.987517 | orchestrator | Monday 05 January 2026 00:53:14 +0000 (0:00:01.619) 0:08:18.111 ******** 2026-01-05 00:56:35.987522 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.987527 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.987535 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.987540 | orchestrator | 2026-01-05 00:56:35.987545 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-01-05 00:56:35.987550 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:01.187) 0:08:19.299 ******** 2026-01-05 00:56:35.987555 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.987559 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.987571 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.987579 | orchestrator | 2026-01-05 00:56:35.987586 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-05 00:56:35.987595 | orchestrator | Monday 05 January 2026 00:53:17 +0000 (0:00:01.841) 0:08:21.140 ******** 2026-01-05 00:56:35.987603 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987610 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987618 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987625 | orchestrator | 2026-01-05 00:56:35.987634 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-01-05 00:56:35.987641 | orchestrator | Monday 05 January 2026 00:53:18 +0000 (0:00:00.308) 0:08:21.449 ******** 2026-01-05 00:56:35.987650 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987655 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987660 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987665 | orchestrator | 2026-01-05 00:56:35.987670 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-05 00:56:35.987674 | orchestrator | Monday 05 January 2026 00:53:18 +0000 (0:00:00.606) 0:08:22.055 ******** 2026-01-05 00:56:35.987679 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 00:56:35.987684 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-01-05 00:56:35.987689 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-05 00:56:35.987694 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-01-05 00:56:35.987698 | orchestrator | ok: 
[testbed-node-4] => (item=1) 2026-01-05 00:56:35.987703 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-01-05 00:56:35.987708 | orchestrator | 2026-01-05 00:56:35.987713 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-05 00:56:35.987718 | orchestrator | Monday 05 January 2026 00:53:19 +0000 (0:00:00.987) 0:08:23.042 ******** 2026-01-05 00:56:35.987723 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 00:56:35.987727 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 00:56:35.987732 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 00:56:35.987737 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-05 00:56:35.987741 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 00:56:35.987746 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-05 00:56:35.987751 | orchestrator | 2026-01-05 00:56:35.987760 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-01-05 00:56:35.987765 | orchestrator | Monday 05 January 2026 00:53:21 +0000 (0:00:02.116) 0:08:25.159 ******** 2026-01-05 00:56:35.987770 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 00:56:35.987774 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 00:56:35.987779 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 00:56:35.987784 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-05 00:56:35.987789 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 00:56:35.987793 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-05 00:56:35.987798 | orchestrator | 2026-01-05 00:56:35.987803 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-05 00:56:35.987808 | orchestrator | Monday 05 January 2026 00:53:25 +0000 (0:00:03.784) 0:08:28.944 ******** 2026-01-05 00:56:35.987812 | orchestrator | 
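The "Systemd start osd" loop iterates the OSD ids collected earlier (0 and 3 on testbed-node-3, 1 and 4 on testbed-node-4, 2 and 5 on testbed-node-5) and starts one instance of the `ceph-osd@` template unit per id. A small sketch of that mapping (`start_commands` is an illustrative helper, not the module's actual implementation):

```python
# Illustrative only: maps collected OSD ids to instantiated systemd units.

def start_commands(osd_ids):
    """One 'systemctl start' per OSD id, using the ceph-osd@ template unit."""
    return [f"systemctl start ceph-osd@{i}.service" for i in osd_ids]

# ids reported for testbed-node-3 in the log's loop items
print(start_commands([0, 3]))
```

The preceding tasks generate the `ceph-osd@.service` template and `ceph-osd.target`, so enabling the target and starting the instantiated units is all that remains per node.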
skipping: [testbed-node-3] 2026-01-05 00:56:35.987817 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987822 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:56:35.987827 | orchestrator | 2026-01-05 00:56:35.987831 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-05 00:56:35.987836 | orchestrator | Monday 05 January 2026 00:53:28 +0000 (0:00:02.702) 0:08:31.647 ******** 2026-01-05 00:56:35.987841 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987845 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987850 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-01-05 00:56:35.987862 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 00:56:35.987867 | orchestrator | 2026-01-05 00:56:35.987872 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-05 00:56:35.987876 | orchestrator | Monday 05 January 2026 00:53:40 +0000 (0:00:12.477) 0:08:44.125 ******** 2026-01-05 00:56:35.987881 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987886 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987891 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987896 | orchestrator | 2026-01-05 00:56:35.987900 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:56:35.987905 | orchestrator | Monday 05 January 2026 00:53:41 +0000 (0:00:01.003) 0:08:45.128 ******** 2026-01-05 00:56:35.987910 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987915 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.987919 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.987924 | orchestrator | 2026-01-05 00:56:35.987929 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2026-01-05 00:56:35.987933 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.374) 0:08:45.503 ******** 2026-01-05 00:56:35.987938 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.987943 | orchestrator | 2026-01-05 00:56:35.987948 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 00:56:35.987953 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.504) 0:08:46.007 ******** 2026-01-05 00:56:35.987957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:56:35.987962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:56:35.987970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:56:35.987975 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987980 | orchestrator | 2026-01-05 00:56:35.987985 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 00:56:35.987990 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:00.789) 0:08:46.797 ******** 2026-01-05 00:56:35.987994 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.987999 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.988004 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.988009 | orchestrator | 2026-01-05 00:56:35.988013 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 00:56:35.988018 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:00.264) 0:08:47.061 ******** 2026-01-05 00:56:35.988023 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988028 | orchestrator | 2026-01-05 00:56:35.988032 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2026-01-05 00:56:35.988037 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:00.213) 0:08:47.275 ******** 2026-01-05 00:56:35.988042 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988047 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.988052 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.988056 | orchestrator | 2026-01-05 00:56:35.988061 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 00:56:35.988066 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.286) 0:08:47.561 ******** 2026-01-05 00:56:35.988071 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988075 | orchestrator | 2026-01-05 00:56:35.988080 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-05 00:56:35.988085 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.226) 0:08:47.788 ******** 2026-01-05 00:56:35.988090 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988094 | orchestrator | 2026-01-05 00:56:35.988099 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-05 00:56:35.988104 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.223) 0:08:48.012 ******** 2026-01-05 00:56:35.988112 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988117 | orchestrator | 2026-01-05 00:56:35.988122 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-05 00:56:35.988127 | orchestrator | Monday 05 January 2026 00:53:44 +0000 (0:00:00.126) 0:08:48.139 ******** 2026-01-05 00:56:35.988131 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988136 | orchestrator | 2026-01-05 00:56:35.988141 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-05 00:56:35.988146 | orchestrator | Monday 05 January 2026 
00:53:45 +0000 (0:00:00.203) 0:08:48.342 ******** 2026-01-05 00:56:35.988154 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988158 | orchestrator | 2026-01-05 00:56:35.988163 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-05 00:56:35.988168 | orchestrator | Monday 05 January 2026 00:53:45 +0000 (0:00:00.200) 0:08:48.543 ******** 2026-01-05 00:56:35.988173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:56:35.988178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:56:35.988182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:56:35.988187 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988192 | orchestrator | 2026-01-05 00:56:35.988197 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-05 00:56:35.988202 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:00.808) 0:08:49.352 ******** 2026-01-05 00:56:35.988206 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988211 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.988216 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.988220 | orchestrator | 2026-01-05 00:56:35.988225 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-05 00:56:35.988230 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:00.315) 0:08:49.667 ******** 2026-01-05 00:56:35.988235 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988240 | orchestrator | 2026-01-05 00:56:35.988244 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-05 00:56:35.988249 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:00.224) 0:08:49.891 ******** 2026-01-05 00:56:35.988254 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
00:56:35.988259 | orchestrator | 2026-01-05 00:56:35.988265 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-05 00:56:35.988272 | orchestrator | 2026-01-05 00:56:35.988280 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 00:56:35.988288 | orchestrator | Monday 05 January 2026 00:53:47 +0000 (0:00:00.656) 0:08:50.548 ******** 2026-01-05 00:56:35.988296 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:35.988307 | orchestrator | 2026-01-05 00:56:35.988313 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:56:35.988317 | orchestrator | Monday 05 January 2026 00:53:48 +0000 (0:00:01.325) 0:08:51.873 ******** 2026-01-05 00:56:35.988323 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-2 2026-01-05 00:56:35.988328 | orchestrator | 2026-01-05 00:56:35.988332 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 00:56:35.988337 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:01.416) 0:08:53.290 ******** 2026-01-05 00:56:35.988342 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.988347 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.988351 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.988358 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:35.988406 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:35.988416 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:35.988431 | orchestrator | 2026-01-05 00:56:35.988444 | orchestrator | TASK [ceph-handler : Check for an osd container] 
*******************************
2026-01-05 00:56:35.988450 | orchestrator | Monday 05 January 2026 00:53:51 +0000 (0:00:01.420) 0:08:54.711 ********
2026-01-05 00:56:35.988455 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988459 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988465 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988473 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988481 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.988489 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.988496 | orchestrator |
2026-01-05 00:56:35.988504 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.988512 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:00.731) 0:08:55.442 ********
2026-01-05 00:56:35.988519 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988527 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.988535 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988540 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988544 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988549 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.988553 | orchestrator |
2026-01-05 00:56:35.988558 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.988563 | orchestrator | Monday 05 January 2026 00:53:53 +0000 (0:00:01.004) 0:08:56.447 ********
2026-01-05 00:56:35.988572 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988577 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988581 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988585 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988590 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.988594 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.988599 | orchestrator |
2026-01-05 00:56:35.988603 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.988610 | orchestrator | Monday 05 January 2026 00:53:53 +0000 (0:00:00.755) 0:08:57.202 ********
2026-01-05 00:56:35.988617 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.988622 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.988629 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.988636 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.988645 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.988652 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.988660 | orchestrator |
2026-01-05 00:56:35.988668 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.988676 | orchestrator | Monday 05 January 2026 00:53:55 +0000 (0:00:01.270) 0:08:58.473 ********
2026-01-05 00:56:35.988681 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.988686 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.988690 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.988695 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988699 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988707 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988712 | orchestrator |
2026-01-05 00:56:35.988717 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.988721 | orchestrator | Monday 05 January 2026 00:53:55 +0000 (0:00:00.655) 0:08:59.128 ********
2026-01-05 00:56:35.988729 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.988736 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.988744 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.988752 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988759 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988767 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988774 | orchestrator |
2026-01-05 00:56:35.988781 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.988789 | orchestrator | Monday 05 January 2026 00:53:56 +0000 (0:00:00.950) 0:09:00.079 ********
2026-01-05 00:56:35.988803 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988809 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.988813 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.988818 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.988822 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.988827 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.988831 | orchestrator |
2026-01-05 00:56:35.988835 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.988840 | orchestrator | Monday 05 January 2026 00:53:57 +0000 (0:00:01.023) 0:09:01.103 ********
2026-01-05 00:56:35.988845 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988849 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.988853 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.988858 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.988862 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.988867 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.988871 | orchestrator |
2026-01-05 00:56:35.988876 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:56:35.988880 | orchestrator | Monday 05 January 2026 00:53:59 +0000 (0:00:01.494) 0:09:02.597 ********
2026-01-05 00:56:35.988885 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.988889 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.988896 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.988903 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.988910 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.988917 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.988925 | orchestrator |
2026-01-05 00:56:35.988932 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:56:35.988941 | orchestrator | Monday 05 January 2026 00:54:00 +0000 (0:00:00.746) 0:09:03.343 ********
2026-01-05 00:56:35.988945 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.988950 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.988954 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.988959 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.988963 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.988968 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.988972 | orchestrator |
2026-01-05 00:56:35.988977 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:56:35.988981 | orchestrator | Monday 05 January 2026 00:54:00 +0000 (0:00:00.965) 0:09:04.308 ********
2026-01-05 00:56:35.988988 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.988995 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989002 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989010 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.989017 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.989025 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.989031 | orchestrator |
2026-01-05 00:56:35.989039 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:56:35.989043 | orchestrator | Monday 05 January 2026 00:54:01 +0000 (0:00:01.047) 0:09:04.997 ********
2026-01-05 00:56:35.989048 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989052 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989057 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989061 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.989066 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.989070 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.989075 | orchestrator |
2026-01-05 00:56:35.989079 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:56:35.989084 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:01.047) 0:09:06.044 ********
2026-01-05 00:56:35.989088 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989093 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989097 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989101 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.989106 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.989138 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.989143 | orchestrator |
2026-01-05 00:56:35.989148 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:56:35.989152 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:00.771) 0:09:06.815 ********
2026-01-05 00:56:35.989157 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.989161 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.989166 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.989170 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.989175 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.989179 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.989184 | orchestrator |
2026-01-05 00:56:35.989191 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:56:35.989198 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:00.906) 0:09:07.722 ********
2026-01-05 00:56:35.989206 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.989213 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.989221 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.989228 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:35.989234 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:35.989239 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:35.989243 | orchestrator |
2026-01-05 00:56:35.989248 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:56:35.989252 | orchestrator | Monday 05 January 2026 00:54:05 +0000 (0:00:00.620) 0:09:08.342 ********
2026-01-05 00:56:35.989257 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.989263 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.989270 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.989282 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989290 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.989297 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.989304 | orchestrator |
2026-01-05 00:56:35.989308 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:56:35.989313 | orchestrator | Monday 05 January 2026 00:54:05 +0000 (0:00:00.734) 0:09:09.077 ********
2026-01-05 00:56:35.989317 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989322 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989326 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989331 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989335 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.989339 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.989344 | orchestrator |
2026-01-05 00:56:35.989348 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:56:35.989353 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:00.574) 0:09:09.652 ********
2026-01-05 00:56:35.989357 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989375 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989383 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989391 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989398 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.989406 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.989413 | orchestrator |
2026-01-05 00:56:35.989421 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-05 00:56:35.989428 | orchestrator | Monday 05 January 2026 00:54:07 +0000 (0:00:01.157) 0:09:10.809 ********
2026-01-05 00:56:35.989436 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.989444 | orchestrator |
2026-01-05 00:56:35.989449 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-05 00:56:35.989453 | orchestrator | Monday 05 January 2026 00:54:11 +0000 (0:00:03.919) 0:09:14.729 ********
2026-01-05 00:56:35.989458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.989462 | orchestrator |
2026-01-05 00:56:35.989467 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-05 00:56:35.989483 | orchestrator | Monday 05 January 2026 00:54:13 +0000 (0:00:02.052) 0:09:16.782 ********
2026-01-05 00:56:35.989490 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.989497 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.989505 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.989513 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989520 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.989525 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.989530 | orchestrator |
2026-01-05 00:56:35.989534 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-05 00:56:35.989539 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:01.710) 0:09:18.492 ********
2026-01-05 00:56:35.989544 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.989548 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.989552 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.989557 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.989561 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.989566 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.989570 | orchestrator |
2026-01-05 00:56:35.989575 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-05 00:56:35.989579 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:00.883) 0:09:19.376 ********
2026-01-05 00:56:35.989588 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.989594 | orchestrator |
2026-01-05 00:56:35.989598 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-05 00:56:35.989603 | orchestrator | Monday 05 January 2026 00:54:17 +0000 (0:00:01.152) 0:09:20.528 ********
2026-01-05 00:56:35.989607 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.989612 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.989616 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.989621 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.989625 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.989629 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.989634 | orchestrator |
2026-01-05 00:56:35.989638 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-05 00:56:35.989643 | orchestrator | Monday 05 January 2026 00:54:19 +0000 (0:00:02.204) 0:09:22.733 ********
2026-01-05 00:56:35.989647 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.989652 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.989656 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.989661 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.989665 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.989670 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.989674 | orchestrator |
2026-01-05 00:56:35.989679 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-05 00:56:35.989683 | orchestrator | Monday 05 January 2026 00:54:23 +0000 (0:00:03.778) 0:09:26.512 ********
2026-01-05 00:56:35.989688 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:56:35.989692 | orchestrator |
2026-01-05 00:56:35.989697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-05 00:56:35.989701 | orchestrator | Monday 05 January 2026 00:54:24 +0000 (0:00:01.367) 0:09:27.879 ********
2026-01-05 00:56:35.989706 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989710 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989715 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989719 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989724 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.989728 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.989732 | orchestrator |
2026-01-05 00:56:35.989737 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-05 00:56:35.989745 | orchestrator | Monday 05 January 2026 00:54:25 +0000 (0:00:00.879) 0:09:28.758 ********
2026-01-05 00:56:35.989750 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.989755 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.989759 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.989767 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:35.989772 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:35.989776 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:35.989781 | orchestrator |
2026-01-05 00:56:35.989785 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-05 00:56:35.989790 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:02.211) 0:09:30.970 ********
2026-01-05 00:56:35.989794 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989799 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989803 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989808 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:35.989812 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:35.989817 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:35.989821 | orchestrator |
2026-01-05 00:56:35.989826 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-05 00:56:35.989830 | orchestrator |
2026-01-05 00:56:35.989835 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 00:56:35.989839 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.977) 0:09:31.947 ********
2026-01-05 00:56:35.989844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.989848 | orchestrator |
2026-01-05 00:56:35.989853 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 00:56:35.989857 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.461) 0:09:32.408 ********
2026-01-05 00:56:35.989862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.989867 | orchestrator |
2026-01-05 00:56:35.989871 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 00:56:35.989875 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.831) 0:09:33.240 ********
2026-01-05 00:56:35.989880 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.989884 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.989889 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.989893 | orchestrator |
2026-01-05 00:56:35.989898 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 00:56:35.989902 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.363) 0:09:33.603 ********
2026-01-05 00:56:35.989907 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989911 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989916 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989920 | orchestrator |
2026-01-05 00:56:35.989925 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 00:56:35.989929 | orchestrator | Monday 05 January 2026 00:54:31 +0000 (0:00:00.734) 0:09:34.338 ********
2026-01-05 00:56:35.989934 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989938 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989943 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989947 | orchestrator |
2026-01-05 00:56:35.989952 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 00:56:35.989956 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:01.052) 0:09:35.390 ********
2026-01-05 00:56:35.989960 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.989965 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.989969 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.989974 | orchestrator |
2026-01-05 00:56:35.989978 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 00:56:35.989986 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:00.725) 0:09:36.116 ********
2026-01-05 00:56:35.989990 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990045 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990051 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990056 | orchestrator |
2026-01-05 00:56:35.990060 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 00:56:35.990065 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:00.321) 0:09:36.437 ********
2026-01-05 00:56:35.990069 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990074 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990078 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990083 | orchestrator |
2026-01-05 00:56:35.990087 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 00:56:35.990092 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:00.287) 0:09:36.725 ********
2026-01-05 00:56:35.990097 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990101 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990105 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990110 | orchestrator |
2026-01-05 00:56:35.990115 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 00:56:35.990119 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:00.606) 0:09:37.331 ********
2026-01-05 00:56:35.990124 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990128 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990133 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990137 | orchestrator |
2026-01-05 00:56:35.990142 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 00:56:35.990146 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:00.773) 0:09:38.105 ********
2026-01-05 00:56:35.990151 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990155 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990160 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990164 | orchestrator |
2026-01-05 00:56:35.990169 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 00:56:35.990173 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:00.731) 0:09:38.837 ********
2026-01-05 00:56:35.990178 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990182 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990187 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990191 | orchestrator |
2026-01-05 00:56:35.990196 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 00:56:35.990200 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:00.330) 0:09:39.167 ********
2026-01-05 00:56:35.990205 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990209 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990214 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990218 | orchestrator |
2026-01-05 00:56:35.990226 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 00:56:35.990231 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.641) 0:09:39.809 ********
2026-01-05 00:56:35.990235 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990240 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990244 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990249 | orchestrator |
2026-01-05 00:56:35.990253 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 00:56:35.990258 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.340) 0:09:40.150 ********
2026-01-05 00:56:35.990262 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990267 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990271 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990276 | orchestrator |
2026-01-05 00:56:35.990280 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 00:56:35.990285 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.365) 0:09:40.515 ********
2026-01-05 00:56:35.990290 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990298 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990309 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990314 | orchestrator |
2026-01-05 00:56:35.990318 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 00:56:35.990323 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.340) 0:09:40.856 ********
2026-01-05 00:56:35.990327 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990332 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990336 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990341 | orchestrator |
2026-01-05 00:56:35.990345 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 00:56:35.990350 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:00.613) 0:09:41.470 ********
2026-01-05 00:56:35.990354 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990358 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990378 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990384 | orchestrator |
2026-01-05 00:56:35.990388 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 00:56:35.990393 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:00.416) 0:09:41.886 ********
2026-01-05 00:56:35.990397 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990402 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990406 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990411 | orchestrator |
2026-01-05 00:56:35.990418 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 00:56:35.990425 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:00.309) 0:09:42.196 ********
2026-01-05 00:56:35.990436 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990446 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990453 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990459 | orchestrator |
2026-01-05 00:56:35.990466 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 00:56:35.990473 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:00.359) 0:09:42.555 ********
2026-01-05 00:56:35.990479 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.990487 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.990494 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.990502 | orchestrator |
2026-01-05 00:56:35.990509 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-05 00:56:35.990515 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:00.887) 0:09:43.443 ********
2026-01-05 00:56:35.990522 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990533 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990541 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-05 00:56:35.990548 | orchestrator |
2026-01-05 00:56:35.990556 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-05 00:56:35.990564 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:00.426) 0:09:43.869 ********
2026-01-05 00:56:35.990571 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.990578 | orchestrator |
2026-01-05 00:56:35.990585 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-05 00:56:35.990593 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:02.061) 0:09:45.930 ********
2026-01-05 00:56:35.990604 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-05 00:56:35.990615 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990620 | orchestrator |
2026-01-05 00:56:35.990624 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-05 00:56:35.990629 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:00.208) 0:09:46.139 ********
2026-01-05 00:56:35.990635 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:56:35.990655 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 00:56:35.990660 | orchestrator |
2026-01-05 00:56:35.990665 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-05 00:56:35.990669 | orchestrator | Monday 05 January 2026 00:54:51 +0000 (0:00:08.447) 0:09:54.586 ********
2026-01-05 00:56:35.990674 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 00:56:35.990678 | orchestrator |
2026-01-05 00:56:35.990688 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-05 00:56:35.990692 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:03.628) 0:09:58.215 ********
2026-01-05 00:56:35.990697 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.990701 | orchestrator |
2026-01-05 00:56:35.990706 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-05 00:56:35.990711 | orchestrator | Monday 05 January 2026 00:54:55 +0000 (0:00:00.603) 0:09:58.818 ********
2026-01-05 00:56:35.990715 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:56:35.990720 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:56:35.990724 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 00:56:35.990729 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-05 00:56:35.990733 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-05 00:56:35.990737 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-05 00:56:35.990742 | orchestrator |
2026-01-05 00:56:35.990747 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-05 00:56:35.990751 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:01.075) 0:09:59.893 ********
2026-01-05 00:56:35.990756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 00:56:35.990760 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 00:56:35.990765 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-05 00:56:35.990770 | orchestrator |
2026-01-05 00:56:35.990774 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-05 00:56:35.990779 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:02.385) 0:10:02.279 ********
2026-01-05 00:56:35.990783 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 00:56:35.990788 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 00:56:35.990792 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.990797 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 00:56:35.990801 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-05 00:56:35.990806 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.990811 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 00:56:35.990815 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-05 00:56:35.990819 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.990824 | orchestrator |
2026-01-05 00:56:35.990828 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-05 00:56:35.990833 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:01.750) 0:10:04.029 ********
2026-01-05 00:56:35.990837 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.990842 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.990846 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.990851 | orchestrator |
2026-01-05 00:56:35.990855 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-05 00:56:35.990863 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:02.990) 0:10:07.019 ********
2026-01-05 00:56:35.990868 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:56:35.990876 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:56:35.990880 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:56:35.990885 | orchestrator |
2026-01-05 00:56:35.990889 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-05 00:56:35.990894 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.400) 0:10:07.420 ********
2026-01-05 00:56:35.990898 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.990903 | orchestrator |
2026-01-05 00:56:35.990908 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-05 00:56:35.990912 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.857) 0:10:08.277 ********
2026-01-05 00:56:35.990917 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.990921 | orchestrator |
2026-01-05 00:56:35.990926 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-05 00:56:35.990930 | orchestrator | Monday 05 January 2026 00:55:05 +0000 (0:00:00.618) 0:10:08.897 ********
2026-01-05 00:56:35.990935 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.990939 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.990944 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.990948 | orchestrator |
2026-01-05 00:56:35.990953 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-05 00:56:35.990957 | orchestrator | Monday 05 January 2026 00:55:06 +0000 (0:00:01.429) 0:10:10.326 ********
2026-01-05 00:56:35.990962 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.990966 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.990971 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.990975 | orchestrator |
2026-01-05 00:56:35.990980 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-05 00:56:35.990984 | orchestrator | Monday 05 January 2026 00:55:08 +0000 (0:00:01.802) 0:10:12.128 ********
2026-01-05 00:56:35.990989 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.990993 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.991000 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.991007 | orchestrator |
2026-01-05 00:56:35.991015 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-05 00:56:35.991021 | orchestrator | Monday 05 January 2026 00:55:10 +0000 (0:00:02.048) 0:10:14.177 ********
2026-01-05 00:56:35.991028 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.991035 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.991043 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.991049 | orchestrator |
2026-01-05 00:56:35.991060 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-05 00:56:35.991066 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:02.370) 0:10:16.548 ********
2026-01-05 00:56:35.991073 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.991079 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.991086 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.991093 | orchestrator |
2026-01-05 00:56:35.991100 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 00:56:35.991108 | orchestrator | Monday 05 January 2026 00:55:14 +0000 (0:00:01.569) 0:10:18.117 ********
2026-01-05 00:56:35.991116 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.991123 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.991130 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.991135 | orchestrator |
2026-01-05 00:56:35.991140 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-05 00:56:35.991144 | orchestrator | Monday 05 January 2026 00:55:15 +0000 (0:00:00.692) 0:10:18.810 ********
2026-01-05 00:56:35.991153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:56:35.991158 | orchestrator |
2026-01-05 00:56:35.991162 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-05 00:56:35.991167 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:01.054) 0:10:19.864 ********
2026-01-05 00:56:35.991171 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:56:35.991176 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:56:35.991180 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:56:35.991185 | orchestrator |
2026-01-05 00:56:35.991189 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-05 00:56:35.991194 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:00.389) 0:10:20.254 ********
2026-01-05 00:56:35.991198 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:56:35.991203 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:56:35.991207 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:56:35.991212 | orchestrator |
2026-01-05 00:56:35.991216 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-05 00:56:35.991221 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:01.355) 0:10:21.609 ********
2026-01-05 00:56:35.991225 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-01-05 00:56:35.991230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:56:35.991234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:56:35.991239 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991243 | orchestrator | 2026-01-05 00:56:35.991248 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-05 00:56:35.991252 | orchestrator | Monday 05 January 2026 00:55:19 +0000 (0:00:00.945) 0:10:22.555 ******** 2026-01-05 00:56:35.991257 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991261 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991266 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991270 | orchestrator | 2026-01-05 00:56:35.991274 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-05 00:56:35.991279 | orchestrator | 2026-01-05 00:56:35.991284 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 00:56:35.991288 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.904) 0:10:23.460 ******** 2026-01-05 00:56:35.991296 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.991301 | orchestrator | 2026-01-05 00:56:35.991305 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 00:56:35.991310 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.561) 0:10:24.022 ******** 2026-01-05 00:56:35.991314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.991319 | orchestrator | 2026-01-05 00:56:35.991323 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-01-05 00:56:35.991328 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:00.768) 0:10:24.791 ******** 2026-01-05 00:56:35.991333 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991337 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991342 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991346 | orchestrator | 2026-01-05 00:56:35.991351 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 00:56:35.991355 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:00.321) 0:10:25.113 ******** 2026-01-05 00:56:35.991359 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991414 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991423 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991430 | orchestrator | 2026-01-05 00:56:35.991437 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 00:56:35.991445 | orchestrator | Monday 05 January 2026 00:55:22 +0000 (0:00:00.710) 0:10:25.823 ******** 2026-01-05 00:56:35.991459 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991466 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991473 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991480 | orchestrator | 2026-01-05 00:56:35.991487 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 00:56:35.991493 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:00.711) 0:10:26.535 ******** 2026-01-05 00:56:35.991500 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991507 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991514 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991521 | orchestrator | 2026-01-05 00:56:35.991527 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 
00:56:35.991535 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:01.097) 0:10:27.633 ******** 2026-01-05 00:56:35.991541 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991548 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991554 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991560 | orchestrator | 2026-01-05 00:56:35.991567 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 00:56:35.991579 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.320) 0:10:27.953 ******** 2026-01-05 00:56:35.991586 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991592 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991599 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991606 | orchestrator | 2026-01-05 00:56:35.991613 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 00:56:35.991620 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.298) 0:10:28.251 ******** 2026-01-05 00:56:35.991626 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991633 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991639 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991648 | orchestrator | 2026-01-05 00:56:35.991654 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 00:56:35.991660 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.289) 0:10:28.541 ******** 2026-01-05 00:56:35.991666 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991673 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991679 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991687 | orchestrator | 2026-01-05 00:56:35.991693 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 00:56:35.991700 | 
orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:01.164) 0:10:29.705 ******** 2026-01-05 00:56:35.991707 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991715 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991722 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991728 | orchestrator | 2026-01-05 00:56:35.991738 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 00:56:35.991745 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.751) 0:10:30.456 ******** 2026-01-05 00:56:35.991751 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991758 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991766 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991773 | orchestrator | 2026-01-05 00:56:35.991780 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 00:56:35.991786 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.347) 0:10:30.804 ******** 2026-01-05 00:56:35.991793 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991800 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991806 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991812 | orchestrator | 2026-01-05 00:56:35.991819 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 00:56:35.991826 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.333) 0:10:31.138 ******** 2026-01-05 00:56:35.991833 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991848 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991852 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991856 | orchestrator | 2026-01-05 00:56:35.991860 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 00:56:35.991864 | orchestrator | Monday 05 January 2026 
00:55:28 +0000 (0:00:00.698) 0:10:31.836 ******** 2026-01-05 00:56:35.991868 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991872 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991876 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991880 | orchestrator | 2026-01-05 00:56:35.991884 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 00:56:35.991889 | orchestrator | Monday 05 January 2026 00:55:28 +0000 (0:00:00.426) 0:10:32.262 ******** 2026-01-05 00:56:35.991893 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.991897 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.991901 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.991905 | orchestrator | 2026-01-05 00:56:35.991915 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 00:56:35.991922 | orchestrator | Monday 05 January 2026 00:55:29 +0000 (0:00:00.348) 0:10:32.611 ******** 2026-01-05 00:56:35.991928 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991934 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991940 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991947 | orchestrator | 2026-01-05 00:56:35.991953 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 00:56:35.991960 | orchestrator | Monday 05 January 2026 00:55:29 +0000 (0:00:00.337) 0:10:32.949 ******** 2026-01-05 00:56:35.991966 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.991973 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.991980 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.991987 | orchestrator | 2026-01-05 00:56:35.991993 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 00:56:35.992001 | orchestrator | Monday 05 January 2026 00:55:30 +0000 (0:00:00.612) 
0:10:33.561 ******** 2026-01-05 00:56:35.992005 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992009 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992013 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992017 | orchestrator | 2026-01-05 00:56:35.992021 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 00:56:35.992025 | orchestrator | Monday 05 January 2026 00:55:30 +0000 (0:00:00.336) 0:10:33.897 ******** 2026-01-05 00:56:35.992029 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.992034 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.992038 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.992042 | orchestrator | 2026-01-05 00:56:35.992046 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 00:56:35.992050 | orchestrator | Monday 05 January 2026 00:55:30 +0000 (0:00:00.341) 0:10:34.239 ******** 2026-01-05 00:56:35.992054 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.992058 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.992062 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.992066 | orchestrator | 2026-01-05 00:56:35.992070 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-05 00:56:35.992075 | orchestrator | Monday 05 January 2026 00:55:31 +0000 (0:00:00.794) 0:10:35.033 ******** 2026-01-05 00:56:35.992079 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.992083 | orchestrator | 2026-01-05 00:56:35.992087 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 00:56:35.992091 | orchestrator | Monday 05 January 2026 00:55:32 +0000 (0:00:00.568) 0:10:35.602 ******** 2026-01-05 00:56:35.992101 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992105 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:56:35.992109 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:56:35.992122 | orchestrator | 2026-01-05 00:56:35.992126 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:56:35.992130 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:02.033) 0:10:37.636 ******** 2026-01-05 00:56:35.992134 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:56:35.992139 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 00:56:35.992143 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.992148 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:56:35.992152 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 00:56:35.992156 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.992160 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:56:35.992164 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 00:56:35.992168 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.992172 | orchestrator | 2026-01-05 00:56:35.992177 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-05 00:56:35.992181 | orchestrator | Monday 05 January 2026 00:55:35 +0000 (0:00:01.484) 0:10:39.120 ******** 2026-01-05 00:56:35.992185 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992189 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992193 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992197 | orchestrator | 2026-01-05 00:56:35.992201 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-05 00:56:35.992205 | orchestrator | Monday 05 January 2026 00:55:36 +0000 
(0:00:00.324) 0:10:39.445 ******** 2026-01-05 00:56:35.992209 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.992214 | orchestrator | 2026-01-05 00:56:35.992218 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-05 00:56:35.992222 | orchestrator | Monday 05 January 2026 00:55:36 +0000 (0:00:00.552) 0:10:39.997 ******** 2026-01-05 00:56:35.992227 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992232 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992241 | orchestrator | 2026-01-05 00:56:35.992245 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-05 00:56:35.992249 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:01.432) 0:10:41.429 ******** 2026-01-05 00:56:35.992253 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992260 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:56:35.992264 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992268 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:56:35.992273 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992277 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 00:56:35.992281 | orchestrator | 2026-01-05 00:56:35.992285 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 00:56:35.992289 | orchestrator | Monday 05 January 2026 00:55:42 +0000 (0:00:04.483) 0:10:45.913 ******** 2026-01-05 00:56:35.992293 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992301 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:56:35.992305 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992309 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:56:35.992313 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:56:35.992318 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:56:35.992322 | orchestrator | 2026-01-05 00:56:35.992326 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 00:56:35.992330 | orchestrator | Monday 05 January 2026 00:55:44 +0000 (0:00:02.293) 0:10:48.207 ******** 2026-01-05 00:56:35.992334 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 00:56:35.992338 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.992342 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 00:56:35.992347 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.992351 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 00:56:35.992355 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.992359 | orchestrator | 2026-01-05 
00:56:35.992387 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-05 00:56:35.992392 | orchestrator | Monday 05 January 2026 00:55:46 +0000 (0:00:01.312) 0:10:49.520 ******** 2026-01-05 00:56:35.992401 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-05 00:56:35.992405 | orchestrator | 2026-01-05 00:56:35.992409 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-05 00:56:35.992413 | orchestrator | Monday 05 January 2026 00:55:46 +0000 (0:00:00.249) 0:10:49.770 ******** 2026-01-05 00:56:35.992417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992438 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992442 | orchestrator | 2026-01-05 00:56:35.992446 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-05 00:56:35.992451 | orchestrator | Monday 05 January 2026 00:55:47 +0000 (0:00:01.163) 0:10:50.933 ******** 2026-01-05 00:56:35.992455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-01-05 00:56:35.992459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 00:56:35.992475 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992479 | orchestrator | 2026-01-05 00:56:35.992483 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-05 00:56:35.992516 | orchestrator | Monday 05 January 2026 00:55:48 +0000 (0:00:00.627) 0:10:51.561 ******** 2026-01-05 00:56:35.992520 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:56:35.992524 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:56:35.992532 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:56:35.992536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:56:35.992540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 00:56:35.992545 | orchestrator | 2026-01-05 00:56:35.992549 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-05 00:56:35.992554 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:31.548) 0:11:23.109 ******** 2026-01-05 00:56:35.992561 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992568 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992574 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992581 | orchestrator | 2026-01-05 00:56:35.992588 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-05 00:56:35.992594 | orchestrator | Monday 05 January 2026 00:56:20 +0000 (0:00:00.296) 0:11:23.405 ******** 2026-01-05 00:56:35.992601 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992608 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992615 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992622 | orchestrator | 2026-01-05 00:56:35.992628 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-05 00:56:35.992632 | orchestrator | Monday 05 January 2026 00:56:20 +0000 (0:00:00.296) 0:11:23.701 ******** 2026-01-05 00:56:35.992636 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.992640 | orchestrator | 2026-01-05 00:56:35.992645 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-05 00:56:35.992649 | orchestrator | Monday 05 January 2026 00:56:21 +0000 (0:00:00.700) 0:11:24.401 ******** 2026-01-05 00:56:35.992653 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.992657 | orchestrator | 
2026-01-05 00:56:35.992661 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-05 00:56:35.992665 | orchestrator | Monday 05 January 2026 00:56:21 +0000 (0:00:00.503) 0:11:24.905 ******** 2026-01-05 00:56:35.992672 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.992676 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.992681 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.992685 | orchestrator | 2026-01-05 00:56:35.992689 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-05 00:56:35.992693 | orchestrator | Monday 05 January 2026 00:56:22 +0000 (0:00:01.407) 0:11:26.312 ******** 2026-01-05 00:56:35.992697 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.992701 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.992705 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.992709 | orchestrator | 2026-01-05 00:56:35.992713 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-05 00:56:35.992717 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:01.520) 0:11:27.833 ******** 2026-01-05 00:56:35.992721 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:35.992726 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:35.992730 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:35.992740 | orchestrator | 2026-01-05 00:56:35.992744 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-05 00:56:35.992748 | orchestrator | Monday 05 January 2026 00:56:26 +0000 (0:00:01.946) 0:11:29.779 ******** 2026-01-05 00:56:35.992752 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992756 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992761 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 00:56:35.992766 | orchestrator | 2026-01-05 00:56:35.992773 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 00:56:35.992779 | orchestrator | Monday 05 January 2026 00:56:29 +0000 (0:00:02.945) 0:11:32.724 ******** 2026-01-05 00:56:35.992786 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992793 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992799 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992806 | orchestrator | 2026-01-05 00:56:35.992812 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-05 00:56:35.992816 | orchestrator | Monday 05 January 2026 00:56:29 +0000 (0:00:00.386) 0:11:33.111 ******** 2026-01-05 00:56:35.992820 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:56:35.992824 | orchestrator | 2026-01-05 00:56:35.992828 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-05 00:56:35.992833 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:00.552) 0:11:33.663 ******** 2026-01-05 00:56:35.992837 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.992841 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.992845 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.992849 | orchestrator | 2026-01-05 00:56:35.992853 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-05 00:56:35.992857 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:00.670) 0:11:34.334 ******** 2026-01-05 00:56:35.992861 | orchestrator 
| skipping: [testbed-node-3] 2026-01-05 00:56:35.992865 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:56:35.992872 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:56:35.992876 | orchestrator | 2026-01-05 00:56:35.992881 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-05 00:56:35.992885 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:00.467) 0:11:34.801 ******** 2026-01-05 00:56:35.992889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:56:35.992893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:56:35.992897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:56:35.992901 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:56:35.992905 | orchestrator | 2026-01-05 00:56:35.992909 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-05 00:56:35.992913 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:00.677) 0:11:35.479 ******** 2026-01-05 00:56:35.992917 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:35.992921 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:35.992925 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:35.992929 | orchestrator | 2026-01-05 00:56:35.992933 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:56:35.992938 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-05 00:56:35.992943 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-05 00:56:35.992951 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-05 00:56:35.992956 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-01-05 00:56:35.992960 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-05 00:56:35.992964 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-05 00:56:35.992968 | orchestrator | 2026-01-05 00:56:35.992972 | orchestrator | 2026-01-05 00:56:35.992976 | orchestrator | 2026-01-05 00:56:35.992983 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:56:35.992987 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:00.265) 0:11:35.745 ******** 2026-01-05 00:56:35.992991 | orchestrator | =============================================================================== 2026-01-05 00:56:35.992996 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.09s 2026-01-05 00:56:35.993000 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.70s 2026-01-05 00:56:35.993004 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.55s 2026-01-05 00:56:35.993008 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.13s 2026-01-05 00:56:35.993012 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.90s 2026-01-05 00:56:35.993016 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.36s 2026-01-05 00:56:35.993020 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.48s 2026-01-05 00:56:35.993024 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.70s 2026-01-05 00:56:35.993028 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.29s 2026-01-05 00:56:35.993032 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.45s 2026-01-05 00:56:35.993036 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.16s 2026-01-05 00:56:35.993040 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.12s 2026-01-05 00:56:35.993044 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.02s 2026-01-05 00:56:35.993048 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.17s 2026-01-05 00:56:35.993052 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.48s 2026-01-05 00:56:35.993056 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.92s 2026-01-05 00:56:35.993060 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.78s 2026-01-05 00:56:35.993069 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.78s 2026-01-05 00:56:35.993073 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.64s 2026-01-05 00:56:35.993077 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.63s 2026-01-05 00:56:35.993081 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is 
in state STARTED 2026-01-05 00:56:35.993086 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:56:35.993090 | orchestrator | 2026-01-05 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:34.064599 | orchestrator | 2026-01-05 00:57:34 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:57:34.065833 | orchestrator | 2026-01-05 00:57:34 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:57:34.067350 | orchestrator | 2026-01-05 00:57:34 | 
INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:57:34.067523 | orchestrator | 2026-01-05 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:37.109817 | orchestrator | 2026-01-05 00:57:37 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:57:37.113230 | orchestrator | 2026-01-05 00:57:37 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:57:37.117297 | orchestrator | 2026-01-05 00:57:37 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:57:37.117370 | orchestrator | 2026-01-05 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:40.172970 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state STARTED 2026-01-05 00:57:40.175714 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:57:40.178549 | orchestrator | 2026-01-05 00:57:40 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED 2026-01-05 00:57:40.178603 | orchestrator | 2026-01-05 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:43.212418 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task 90183beb-8910-42d0-a36b-44452bbed8b8 is in state SUCCESS 2026-01-05 00:57:43.213260 | orchestrator | 2026-01-05 00:57:43.213299 | orchestrator | 2026-01-05 00:57:43.213308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:57:43.213317 | orchestrator | 2026-01-05 00:57:43.213324 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:57:43.213332 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-01-05 00:57:43.213339 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:43.213372 | orchestrator | ok: [testbed-node-1] 2026-01-05 
00:57:43.213379 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:43.213386 | orchestrator | 2026-01-05 00:57:43.213392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:57:43.213399 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.320) 0:00:00.595 ******** 2026-01-05 00:57:43.213407 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-05 00:57:43.213414 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-05 00:57:43.213421 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-05 00:57:43.213428 | orchestrator | 2026-01-05 00:57:43.213434 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-05 00:57:43.213440 | orchestrator | 2026-01-05 00:57:43.213448 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 00:57:43.213455 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.657) 0:00:01.253 ******** 2026-01-05 00:57:43.213462 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:43.213469 | orchestrator | 2026-01-05 00:57:43.213616 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-05 00:57:43.213630 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:00.525) 0:00:01.779 ******** 2026-01-05 00:57:43.213636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:57:43.213643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:57:43.213650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:57:43.213656 | orchestrator | 2026-01-05 00:57:43.213662 | orchestrator | TASK 
[opensearch : Ensuring config directories exist] ************************** 2026-01-05 00:57:43.213670 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:01.712) 0:00:03.491 ******** 2026-01-05 00:57:43.213681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213723 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213748 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213768 | orchestrator | 2026-01-05 00:57:43.213775 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 00:57:43.213781 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:02.248) 0:00:05.739 ******** 2026-01-05 00:57:43.213793 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:43.213800 | orchestrator | 2026-01-05 00:57:43.213807 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-05 00:57:43.213813 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:00.939) 0:00:06.679 ******** 2026-01-05 00:57:43.213827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.213851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.213883 | orchestrator | 2026-01-05 00:57:43.213891 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-05 00:57:43.213897 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:03.338) 0:00:10.018 ******** 2026-01-05 00:57:43.213904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.213915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.213936 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:43.213948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.213955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.213962 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:43.213970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.213980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.213991 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:43.213998 | orchestrator | 2026-01-05 00:57:43.214005 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-05 00:57:43.214049 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:01.285) 0:00:11.303 ******** 2026-01-05 00:57:43.214062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.214070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.214077 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:43.214083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.214096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.214112 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:43.214123 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:43.214131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:43.214139 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 00:57:43.214145 | orchestrator | 2026-01-05 00:57:43.214152 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-05 00:57:43.214160 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:00.910) 0:00:12.213 ******** 2026-01-05 00:57:43.214167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214284 | orchestrator | 2026-01-05 00:57:43.214292 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-05 00:57:43.214299 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:02.386) 0:00:14.600 ******** 2026-01-05 00:57:43.214306 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:43.214313 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:43.214320 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:43.214327 | orchestrator | 2026-01-05 00:57:43.214335 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-05 00:57:43.214342 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:02.707) 0:00:17.308 ******** 2026-01-05 00:57:43.214350 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:43.214357 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:43.214365 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:43.214372 | orchestrator | 2026-01-05 00:57:43.214379 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-05 00:57:43.214387 | orchestrator | Monday 05 January 2026 00:54:55 +0000 (0:00:02.339) 0:00:19.647 ******** 2026-01-05 00:57:43.214401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:43.214433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:43.214460 | orchestrator | 2026-01-05 00:57:43.214467 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 00:57:43.214475 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:01.931) 0:00:21.579 ******** 2026-01-05 00:57:43.214483 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:43.214495 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:57:43.214502 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:43.214509 | orchestrator |
2026-01-05 00:57:43.214516 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-01-05 00:57:43.214523 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:00.289) 0:00:21.868 ********
2026-01-05 00:57:43.214530 | orchestrator |
2026-01-05 00:57:43.214537 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-01-05 00:57:43.214544 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:00.068) 0:00:21.937 ********
2026-01-05 00:57:43.214552 | orchestrator |
2026-01-05 00:57:43.214559 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-01-05 00:57:43.214567 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:00.073) 0:00:22.011 ********
2026-01-05 00:57:43.214574 | orchestrator |
2026-01-05 00:57:43.214581 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-01-05 00:57:43.214588 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:00.073) 0:00:22.084 ********
2026-01-05 00:57:43.214595 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:43.214603 | orchestrator |
2026-01-05 00:57:43.214610 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-01-05 00:57:43.214617 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:00.916) 0:00:23.001 ********
2026-01-05 00:57:43.214625 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:43.214632 | orchestrator |
2026-01-05 00:57:43.214640 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-01-05 00:57:43.214647 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:00.238) 0:00:23.239 ********
2026-01-05 00:57:43.214654 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:43.214662 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:43.214669 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:43.214676 | orchestrator |
2026-01-05 00:57:43.214687 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-01-05 00:57:43.214694 | orchestrator | Monday 05 January 2026 00:56:11 +0000 (0:01:12.653) 0:01:35.892 ********
2026-01-05 00:57:43.214701 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:43.214709 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:57:43.214716 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:57:43.214722 | orchestrator |
2026-01-05 00:57:43.214728 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 00:57:43.214735 | orchestrator | Monday 05 January 2026 00:57:31 +0000 (0:01:19.897) 0:02:55.790 ********
2026-01-05 00:57:43.214742 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:43.214749 | orchestrator |
2026-01-05 00:57:43.214755 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-01-05 00:57:43.214762 | orchestrator | Monday 05 January 2026 00:57:32 +0000 (0:00:00.747) 0:02:56.537 ********
2026-01-05 00:57:43.214769 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:43.214775 | orchestrator |
2026-01-05 00:57:43.214783 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-01-05 00:57:43.214789 | orchestrator | Monday 05 January 2026 00:57:34 +0000 (0:00:02.398) 0:02:58.936 ********
2026-01-05 00:57:43.214797 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:43.214804 | orchestrator |
2026-01-05 00:57:43.214811 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-01-05 00:57:43.214819 | orchestrator | Monday 05 January 2026 00:57:36 +0000 (0:00:02.252) 0:03:01.188 ********
2026-01-05 00:57:43.214826 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:43.214833 | orchestrator |
2026-01-05 00:57:43.214840 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-01-05 00:57:43.214847 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:02.739) 0:03:03.927 ********
2026-01-05 00:57:43.214854 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:57:43.214871 | orchestrator |
2026-01-05 00:57:43.214883 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:57:43.214892 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 00:57:43.214902 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:57:43.214909 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:57:43.214916 | orchestrator |
2026-01-05 00:57:43.214923 | orchestrator |
2026-01-05 00:57:43.214930 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:57:43.214938 | orchestrator | Monday 05 January 2026 00:57:42 +0000 (0:00:02.459) 0:03:06.387 ********
2026-01-05 00:57:43.214945 | orchestrator | ===============================================================================
2026-01-05 00:57:43.214952 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.90s
2026-01-05 00:57:43.214959 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.65s
2026-01-05 00:57:43.214967 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.34s
2026-01-05 00:57:43.214974 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s
2026-01-05 00:57:43.214982 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.71s
2026-01-05 00:57:43.214989 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.46s
2026-01-05 00:57:43.214996 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.40s
2026-01-05 00:57:43.215003 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s
2026-01-05 00:57:43.215011 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.34s
2026-01-05 00:57:43.215018 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.25s
2026-01-05 00:57:43.215025 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.25s
2026-01-05 00:57:43.215032 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.93s
2026-01-05 00:57:43.215040 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.71s
2026-01-05 00:57:43.215047 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.29s
2026-01-05 00:57:43.215055 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.94s
2026-01-05 00:57:43.215062 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.92s
2026-01-05 00:57:43.215069 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.91s
2026-01-05 00:57:43.215077 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s
2026-01-05 00:57:43.215084 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2026-01-05 00:57:43.215092 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2026-01-05 00:57:43.216583 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED
2026-01-05 00:57:43.216637 | orchestrator | 2026-01-05 00:57:43 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state STARTED
2026-01-05 00:57:43.216645 | orchestrator | 2026-01-05 00:57:43 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:57:46.272553 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED
2026-01-05 00:57:46.273647 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED
2026-01-05 00:57:46.278582 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED
2026-01-05 00:57:46.281942 | orchestrator | 2026-01-05 00:57:46 | INFO  | Task 31f03667-6c85-4b5b-955e-6cb39c66fe65 is in state SUCCESS
2026-01-05 00:57:46.284609 | orchestrator |
2026-01-05 00:57:46.284665 | orchestrator |
2026-01-05 00:57:46.284672 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-01-05 00:57:46.284677 | orchestrator |
2026-01-05 00:57:46.284681 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-05 00:57:46.284686 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:00.098) 0:00:00.098 ********
2026-01-05 00:57:46.284690 | orchestrator | ok: [localhost] => {
2026-01-05 00:57:46.284695 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-01-05 00:57:46.284699 | orchestrator | }
2026-01-05 00:57:46.284704 | orchestrator |
2026-01-05 00:57:46.284708 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-01-05 00:57:46.284712 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:00.041) 0:00:00.140 ********
2026-01-05 00:57:46.284716 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-01-05 00:57:46.284722 | orchestrator | ...ignoring
2026-01-05 00:57:46.284726 | orchestrator |
2026-01-05 00:57:46.284729 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-01-05 00:57:46.284733 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:02.928) 0:00:03.069 ********
2026-01-05 00:57:46.284737 | orchestrator | skipping: [localhost]
2026-01-05 00:57:46.284741 | orchestrator |
2026-01-05 00:57:46.284745 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-01-05 00:57:46.284748 | orchestrator | Monday 05 January 2026 00:54:38 +0000 (0:00:00.064) 0:00:03.133 ********
2026-01-05 00:57:46.284761 | orchestrator | ok: [localhost]
2026-01-05 00:57:46.284765 | orchestrator |
2026-01-05 00:57:46.284769 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:57:46.284773 | orchestrator |
2026-01-05 00:57:46.284777 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:57:46.284780 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:00.166) 0:00:03.300 ********
2026-01-05 00:57:46.284784 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:57:46.284788 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:57:46.284792 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:57:46.284796 | orchestrator |
2026-01-05 00:57:46.284799 |
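The failed "Check MariaDB service" task above is an intentional probe, as the preceding message says: the playbook connects to 192.168.16.9:3306 with Ansible's wait_for and searches the server greeting for the string "MariaDB" to decide between a fresh deploy and an upgrade, ignoring the failure when nothing is deployed yet. A rough Python sketch of that kind of banner check (the `wait_for_banner` helper is hypothetical, not the actual module code):

```python
import re
import socket
import time

def wait_for_banner(host, port, pattern, timeout=2.0):
    """Poll a TCP port and search the server greeting for a byte pattern,
    roughly what wait_for does with search_regex. MariaDB sends its
    version string in the initial handshake packet, so an unauthenticated
    connect is enough to detect it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as s:
                s.settimeout(1.0)
                data = s.recv(1024)
                if re.search(pattern, data):
                    return True
        except OSError:
            # connection refused or timed out; retry until the deadline
            time.sleep(0.1)
    return False
```

On first deployment the probe times out exactly as in the log, the failure is ignored, and the play falls through to `kolla_action_mariadb = kolla_action_ng`.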
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:57:46.284803 | orchestrator | Monday 05 January 2026 00:54:39 +0000 (0:00:00.384) 0:00:03.685 ********
2026-01-05 00:57:46.284807 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-05 00:57:46.284841 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-05 00:57:46.284846 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-01-05 00:57:46.284850 | orchestrator |
2026-01-05 00:57:46.285076 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-01-05 00:57:46.285084 | orchestrator |
2026-01-05 00:57:46.285088 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-01-05 00:57:46.285092 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:00.869) 0:00:04.554 ********
2026-01-05 00:57:46.285097 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:57:46.285101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:57:46.285105 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:57:46.285109 | orchestrator |
2026-01-05 00:57:46.285113 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 00:57:46.285117 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:00.399) 0:00:04.954 ********
2026-01-05 00:57:46.285121 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:46.285142 | orchestrator |
2026-01-05 00:57:46.285146 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-01-05 00:57:46.285150 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:00.696) 0:00:05.650 ********
2026-01-05 00:57:46.285178 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285201 | orchestrator | 2026-01-05 00:57:46.285209 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-05 00:57:46.285249 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:03.695) 0:00:09.345 ******** 2026-01-05 00:57:46.285253 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285258 | orchestrator | 
changed: [testbed-node-0] 2026-01-05 00:57:46.285261 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285265 | orchestrator | 2026-01-05 00:57:46.285269 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-05 00:57:46.285273 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:00.684) 0:00:10.030 ******** 2026-01-05 00:57:46.285276 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285280 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285284 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.285287 | orchestrator | 2026-01-05 00:57:46.285291 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-05 00:57:46.285295 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:01.593) 0:00:11.624 ******** 2026-01-05 00:57:46.285299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285329 | orchestrator | 2026-01-05 00:57:46.285333 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-05 00:57:46.285337 | orchestrator | Monday 05 January 2026 00:54:51 +0000 (0:00:03.632) 0:00:15.256 ******** 2026-01-05 00:57:46.285340 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285344 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285348 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.285352 | orchestrator | 2026-01-05 00:57:46.285355 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-05 00:57:46.285359 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:01.330) 0:00:16.587 ******** 2026-01-05 00:57:46.285363 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.285367 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:46.285370 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:46.285374 | orchestrator | 2026-01-05 00:57:46.285378 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:57:46.285382 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:04.317) 0:00:20.905 ******** 2026-01-05 00:57:46.285386 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:46.285390 | orchestrator | 2026-01-05 00:57:46.285394 | 
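The `custom_member_list` entries dumped in the task items above follow an active/backup pattern: the first Galera node takes all traffic and the remaining nodes are marked `backup`, so writes hit a single node at a time. A sketch that reproduces those haproxy `server` lines (the `haproxy_members` helper is hypothetical; it assumes the first listed host is the writer, as in this log):

```python
def haproxy_members(hosts, port=3306):
    """Render haproxy 'server' lines in the active/backup shape used for
    Galera behind haproxy: first node active, the rest marked backup."""
    lines = []
    for i, (name, addr) in enumerate(hosts):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"  # only the first member receives traffic
        lines.append(line)
    return lines
```

Routing all writes through one member avoids write conflicts and certification failures that multi-writer Galera setups can produce.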
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:57:46.285405 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:00.561) 0:00:21.467 ******** 2026-01-05 00:57:46.285423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285428 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285439 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 
00:57:46.285454 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285458 | orchestrator | 2026-01-05 00:57:46.285461 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:57:46.285465 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:03.939) 0:00:25.406 ******** 2026-01-05 00:57:46.285469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285477 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285492 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285511 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285517 | orchestrator | 2026-01-05 00:57:46.285523 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 00:57:46.285529 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:03.734) 0:00:29.141 ******** 2026-01-05 00:57:46.285546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285566 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285572 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:57:46.285585 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285591 | orchestrator | 2026-01-05 00:57:46.285597 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-05 00:57:46.285603 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:03.032) 0:00:32.173 ******** 2026-01-05 00:57:46.285653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 00:57:46.285688 | orchestrator | 2026-01-05 00:57:46.285692 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-05 00:57:46.285696 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:03.497) 0:00:35.671 ******** 2026-01-05 00:57:46.285701 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.285705 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:46.285710 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:46.285714 | orchestrator | 2026-01-05 00:57:46.285718 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-05 00:57:46.285723 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.888) 0:00:36.559 ******** 2026-01-05 00:57:46.285727 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.285732 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.285737 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.285741 | orchestrator | 2026-01-05 00:57:46.285745 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-05 00:57:46.285750 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.624) 0:00:37.184 ******** 2026-01-05 00:57:46.285755 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.285759 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.285763 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.285768 | orchestrator | 2026-01-05 00:57:46.285772 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-05 00:57:46.285777 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:00.554) 0:00:37.738 ******** 2026-01-05 00:57:46.285782 | orchestrator | fatal: 
[testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-05 00:57:46.285789 | orchestrator | ...ignoring 2026-01-05 00:57:46.285793 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-05 00:57:46.285798 | orchestrator | ...ignoring 2026-01-05 00:57:46.285802 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-05 00:57:46.285806 | orchestrator | ...ignoring 2026-01-05 00:57:46.285811 | orchestrator | 2026-01-05 00:57:46.285815 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-05 00:57:46.285820 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:10.937) 0:00:48.675 ******** 2026-01-05 00:57:46.285824 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.285828 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.285833 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.285837 | orchestrator | 2026-01-05 00:57:46.285841 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-05 00:57:46.285846 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.454) 0:00:49.130 ******** 2026-01-05 00:57:46.285850 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285855 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285862 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285867 | orchestrator | 2026-01-05 00:57:46.285871 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-05 00:57:46.285876 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.706) 0:00:49.836 ******** 2026-01-05 00:57:46.285880 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285885 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285892 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285896 | orchestrator | 2026-01-05 00:57:46.285901 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-05 00:57:46.285905 | orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:00.500) 0:00:50.337 ******** 2026-01-05 00:57:46.285910 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285914 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285919 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285923 | orchestrator | 2026-01-05 00:57:46.285928 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-05 00:57:46.285935 | orchestrator | Monday 05 January 2026 00:55:26 +0000 (0:00:00.465) 0:00:50.802 ******** 2026-01-05 00:57:46.285940 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.285944 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.285949 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.285953 | orchestrator | 2026-01-05 00:57:46.285958 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-05 00:57:46.285962 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.454) 0:00:51.256 ******** 2026-01-05 00:57:46.285966 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.285971 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.285975 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.285980 | orchestrator | 2026-01-05 00:57:46.285984 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:57:46.285989 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.707) 0:00:51.964 ******** 2026-01-05 00:57:46.285993 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 00:57:46.285997 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286002 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-05 00:57:46.286006 | orchestrator | 2026-01-05 00:57:46.286055 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-05 00:57:46.286062 | orchestrator | Monday 05 January 2026 00:55:28 +0000 (0:00:00.460) 0:00:52.424 ******** 2026-01-05 00:57:46.286067 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286071 | orchestrator | 2026-01-05 00:57:46.286076 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-05 00:57:46.286081 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:10.627) 0:01:03.052 ******** 2026-01-05 00:57:46.286085 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286090 | orchestrator | 2026-01-05 00:57:46.286094 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 00:57:46.286097 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:00.138) 0:01:03.190 ******** 2026-01-05 00:57:46.286101 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.286105 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286109 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286112 | orchestrator | 2026-01-05 00:57:46.286116 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-05 00:57:46.286120 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:00.998) 0:01:04.189 ******** 2026-01-05 00:57:46.286124 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286127 | orchestrator | 2026-01-05 00:57:46.286131 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-05 00:57:46.286135 | orchestrator | 
Monday 05 January 2026 00:55:48 +0000 (0:00:08.093) 0:01:12.283 ******** 2026-01-05 00:57:46.286139 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286146 | orchestrator | 2026-01-05 00:57:46.286150 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-05 00:57:46.286154 | orchestrator | Monday 05 January 2026 00:55:50 +0000 (0:00:02.582) 0:01:14.866 ******** 2026-01-05 00:57:46.286158 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286162 | orchestrator | 2026-01-05 00:57:46.286166 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-05 00:57:46.286169 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:02.597) 0:01:17.463 ******** 2026-01-05 00:57:46.286173 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286177 | orchestrator | 2026-01-05 00:57:46.286181 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-05 00:57:46.286185 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:00.134) 0:01:17.597 ******** 2026-01-05 00:57:46.286188 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.286192 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286196 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286200 | orchestrator | 2026-01-05 00:57:46.286203 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-05 00:57:46.286207 | orchestrator | Monday 05 January 2026 00:55:53 +0000 (0:00:00.330) 0:01:17.928 ******** 2026-01-05 00:57:46.286229 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.286233 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-05 00:57:46.286237 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:46.286241 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:46.286244 | 
orchestrator | 2026-01-05 00:57:46.286248 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 00:57:46.286252 | orchestrator | skipping: no hosts matched 2026-01-05 00:57:46.286256 | orchestrator | 2026-01-05 00:57:46.286260 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 00:57:46.286263 | orchestrator | 2026-01-05 00:57:46.286267 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:57:46.286271 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:00.648) 0:01:18.577 ******** 2026-01-05 00:57:46.286274 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:46.286278 | orchestrator | 2026-01-05 00:57:46.286282 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 00:57:46.286285 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:24.314) 0:01:42.892 ******** 2026-01-05 00:57:46.286289 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.286293 | orchestrator | 2026-01-05 00:57:46.286296 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:57:46.286300 | orchestrator | Monday 05 January 2026 00:56:29 +0000 (0:00:10.628) 0:01:53.521 ******** 2026-01-05 00:57:46.286304 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.286308 | orchestrator | 2026-01-05 00:57:46.286315 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 00:57:46.286318 | orchestrator | 2026-01-05 00:57:46.286322 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:57:46.286326 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:02.662) 0:01:56.184 ******** 2026-01-05 00:57:46.286329 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:46.286333 | 
orchestrator | 2026-01-05 00:57:46.286337 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 00:57:46.286345 | orchestrator | Monday 05 January 2026 00:56:51 +0000 (0:00:19.085) 0:02:15.269 ******** 2026-01-05 00:57:46.286349 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.286353 | orchestrator | 2026-01-05 00:57:46.286357 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:57:46.286361 | orchestrator | Monday 05 January 2026 00:57:06 +0000 (0:00:15.617) 0:02:30.886 ******** 2026-01-05 00:57:46.286364 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.286369 | orchestrator | 2026-01-05 00:57:46.286375 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 00:57:46.286390 | orchestrator | 2026-01-05 00:57:46.286399 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 00:57:46.286405 | orchestrator | Monday 05 January 2026 00:57:09 +0000 (0:00:02.678) 0:02:33.565 ******** 2026-01-05 00:57:46.286411 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286417 | orchestrator | 2026-01-05 00:57:46.286423 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 00:57:46.286444 | orchestrator | Monday 05 January 2026 00:57:27 +0000 (0:00:17.902) 0:02:51.468 ******** 2026-01-05 00:57:46.286451 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286457 | orchestrator | 2026-01-05 00:57:46.286463 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 00:57:46.286469 | orchestrator | Monday 05 January 2026 00:57:27 +0000 (0:00:00.593) 0:02:52.061 ******** 2026-01-05 00:57:46.286476 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286482 | orchestrator | 2026-01-05 00:57:46.286488 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2026-01-05 00:57:46.286494 | orchestrator | 2026-01-05 00:57:46.286500 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 00:57:46.286504 | orchestrator | Monday 05 January 2026 00:57:30 +0000 (0:00:02.861) 0:02:54.923 ******** 2026-01-05 00:57:46.286508 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:46.286511 | orchestrator | 2026-01-05 00:57:46.286515 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-05 00:57:46.286519 | orchestrator | Monday 05 January 2026 00:57:31 +0000 (0:00:00.549) 0:02:55.473 ******** 2026-01-05 00:57:46.286523 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286526 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286530 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286534 | orchestrator | 2026-01-05 00:57:46.286538 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-05 00:57:46.286542 | orchestrator | Monday 05 January 2026 00:57:33 +0000 (0:00:02.424) 0:02:57.897 ******** 2026-01-05 00:57:46.286545 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286549 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286553 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286557 | orchestrator | 2026-01-05 00:57:46.286561 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-05 00:57:46.286564 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:02.173) 0:03:00.071 ******** 2026-01-05 00:57:46.286568 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286572 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286576 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286580 | 
orchestrator | 2026-01-05 00:57:46.286583 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-05 00:57:46.286587 | orchestrator | Monday 05 January 2026 00:57:38 +0000 (0:00:02.247) 0:03:02.318 ******** 2026-01-05 00:57:46.286591 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286594 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286598 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:46.286602 | orchestrator | 2026-01-05 00:57:46.286605 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-05 00:57:46.286609 | orchestrator | Monday 05 January 2026 00:57:40 +0000 (0:00:02.232) 0:03:04.551 ******** 2026-01-05 00:57:46.286613 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:46.286617 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:46.286621 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:46.286624 | orchestrator | 2026-01-05 00:57:46.286628 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 00:57:46.286632 | orchestrator | Monday 05 January 2026 00:57:43 +0000 (0:00:02.813) 0:03:07.364 ******** 2026-01-05 00:57:46.286636 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:46.286639 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:46.286648 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:46.286652 | orchestrator | 2026-01-05 00:57:46.286656 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:57:46.286660 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-05 00:57:46.286664 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-05 00:57:46.286669 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2026-01-05 00:57:46.286673 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-05 00:57:46.286677 | orchestrator | 2026-01-05 00:57:46.286681 | orchestrator | 2026-01-05 00:57:46.286689 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:57:46.286693 | orchestrator | Monday 05 January 2026 00:57:43 +0000 (0:00:00.218) 0:03:07.583 ******** 2026-01-05 00:57:46.286698 | orchestrator | =============================================================================== 2026-01-05 00:57:46.286707 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.40s 2026-01-05 00:57:46.286716 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.25s 2026-01-05 00:57:46.286728 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.90s 2026-01-05 00:57:46.286734 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2026-01-05 00:57:46.286740 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.63s 2026-01-05 00:57:46.286745 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.09s 2026-01-05 00:57:46.286751 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.34s 2026-01-05 00:57:46.286757 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.32s 2026-01-05 00:57:46.286763 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.94s 2026-01-05 00:57:46.286769 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.73s 2026-01-05 00:57:46.286775 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.70s 2026-01-05 
00:57:46.286780 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.63s 2026-01-05 00:57:46.286786 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.50s 2026-01-05 00:57:46.286792 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.03s 2026-01-05 00:57:46.286798 | orchestrator | Check MariaDB service --------------------------------------------------- 2.93s 2026-01-05 00:57:46.286804 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.86s 2026-01-05 00:57:46.286821 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.81s 2026-01-05 00:57:46.286828 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.60s 2026-01-05 00:57:46.286834 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.58s 2026-01-05 00:57:46.286840 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.42s 2026-01-05 00:57:46.286846 | orchestrator | 2026-01-05 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:49.340435 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:57:49.341977 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:57:49.345036 | orchestrator | 2026-01-05 00:57:49 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:57:49.345125 | orchestrator | 2026-01-05 00:57:49 | INFO  | Wait 1 second(s) until the next check
00:58:38.099455 | orchestrator | 2026-01-05 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:41.150397 | orchestrator | 2026-01-05 00:58:41 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:58:41.153452 | orchestrator | 2026-01-05 00:58:41 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:41.156250 | orchestrator | 2026-01-05 00:58:41 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:41.156947 | orchestrator | 2026-01-05 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:44.200887 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:58:44.201842 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:44.203017 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:44.203701 | orchestrator | 2026-01-05 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:47.255604 | orchestrator | 2026-01-05 00:58:47 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:58:47.257754 | orchestrator | 2026-01-05 00:58:47 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:47.259599 | orchestrator | 2026-01-05 00:58:47 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:47.259835 | orchestrator | 2026-01-05 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:50.301525 | orchestrator | 2026-01-05 00:58:50 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state STARTED 2026-01-05 00:58:50.305170 | orchestrator | 2026-01-05 00:58:50 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:50.307605 | orchestrator | 2026-01-05 00:58:50 | 
INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED
2026-01-05 00:58:50 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:58:53 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED
2026-01-05 00:58:53 | INFO  | Task 4abc22f7-b792-43c4-920c-8c3c67a596bf is in state SUCCESS

[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Create ceph pools] *******************************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Monday 05 January 2026 00:56:37 +0000 (0:00:00.630) 0:00:00.630 ********
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : Check if it is atomic host] *********************************
Monday 05 January 2026 00:56:38 +0000 (0:00:00.757) 0:00:01.388 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Monday 05 January 2026 00:56:39
+0000 (0:00:00.636) 0:00:02.025 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Check if podman binary is present] **************************
Monday 05 January 2026 00:56:39 +0000 (0:00:00.323) 0:00:02.349 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact container_binary] **********************************
Monday 05 January 2026 00:56:40 +0000 (0:00:00.839) 0:00:03.188 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Monday 05 January 2026 00:56:40 +0000 (0:00:00.360) 0:00:03.548 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Monday 05 January 2026 00:56:41 +0000 (0:00:00.330) 0:00:03.879 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Monday 05 January 2026 00:56:41 +0000 (0:00:00.337) 0:00:04.216 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Monday 05 January 2026 00:56:41 +0000 (0:00:00.535) 0:00:04.752 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Monday 05 January 2026 00:56:42 +0000 (0:00:00.306) 0:00:05.059 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Monday 05 January 2026 00:56:42 +0000 (0:00:00.667) 0:00:05.726 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Find a running mon container] *******************************
Monday 05 January 2026 00:56:43 +0000 (0:00:00.447) 0:00:06.173 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Monday 05 January 2026 00:56:45 +0000 (0:00:02.219) 0:00:08.392 ********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Monday 05 January 2026 00:56:46 +0000 (0:00:00.663) 0:00:09.056 ********
skipping: [testbed-node-3] => (item={'changed': False, 'skipped':
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Monday 05 January 2026 00:56:47 +0000 (0:00:00.885) 0:00:09.941 ********
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Monday 05 January 2026 00:56:47 +0000 (0:00:00.370) 0:00:10.312 ********
ok: [testbed-node-3] => (item={'changed': False, 'stdout': '172d64bb54a3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 00:56:44.050024', 'end': '2026-01-05 00:56:44.090754', 'delta': '0:00:00.040730', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['172d64bb54a3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'abbb29a068db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 00:56:44.810547', 'end': '2026-01-05 00:56:44.869010', 'delta': '0:00:00.058463', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['abbb29a068db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
ok: [testbed-node-3] => (item={'changed': False, 'stdout': '449bff127ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 00:56:45.366837', 'end': '2026-01-05 00:56:45.417025', 'delta': '0:00:00.050188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['449bff127ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Monday 05 January 2026 00:56:47 +0000 (0:00:00.216) 0:00:10.528 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Monday 05 January 2026 00:56:48 +0000 (0:00:00.459) 0:00:10.988 ********
ok:
[testbed-node-3 -> testbed-node-2(192.168.16.12)]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Monday 05 January 2026 00:56:50 +0000 (0:00:02.245) 0:00:13.233 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Get current fsid] *******************************************
Monday 05 January 2026 00:56:50 +0000 (0:00:00.362) 0:00:13.596 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact fsid] **********************************************
Monday 05 January 2026 00:56:51 +0000 (0:00:00.470) 0:00:14.067 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Monday 05 January 2026 00:56:51 +0000 (0:00:00.499) 0:00:14.567 ********
ok: [testbed-node-3]

TASK
[ceph-facts : Generate cluster fsid] **************************************
Monday 05 January 2026 00:56:51 +0000 (0:00:00.134) 0:00:14.701 ********
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact fsid] **********************************************
Monday 05 January 2026 00:56:52 +0000 (0:00:00.233) 0:00:14.935 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve device link(s)] *************************************
Monday 05 January 2026 00:56:52 +0000 (0:00:00.308) 0:00:15.243 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Monday 05 January 2026 00:56:52 +0000 (0:00:00.369) 0:00:15.613 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
Monday 05 January 2026
00:56:53 +0000 (0:00:00.558) 0:00:16.171 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
Monday 05 January 2026 00:56:53 +0000 (0:00:00.346) 0:00:16.518 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
Monday 05 January 2026 00:56:54 +0000 (0:00:00.330) 0:00:16.849 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
Monday 05 January 2026 00:56:54 +0000 (0:00:00.333) 0:00:17.182 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Collect existed devices] ************************************
Monday 05 January 2026
00:56:54 +0000 (0:00:00.552) 0:00:17.735 ********
skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc', 'dm-uuid-LVM-gSvEmzN4sR9qQBYCmcrvBPRZDc8ahtdz7QNh6Z7yAClPqMCMIbCPf8VhzgZxO5zo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21', 'dm-uuid-LVM-MiZyFfsPoyjf4UhEA6dyhdxf8Nt4buWcB0XMxgbd6nRp4y3WboeXGvfpk5cHIS0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757', 'dm-uuid-LVM-WytFOHQK3TrfIaOFPVQ0VV2bPy4iCg1x50pstBe59FSIXJ1gqkDnGo60OOnA4yLO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-16yBjx-FwIA-tBBg-2Dng-Ip0w-C2XU-Haljpf', 'scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4', 'scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d', 'dm-uuid-LVM-qPfa1lYL90pRKqe9QP0OQgRUjxiwecdBx92dXsfZGMyB7zYsWbhzbfqkgaiYUwfs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9EP0q-zA7u-Zh2T-PDAd-IujH-Rp2z-NGEN8T', 'scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392', 'scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0', 'scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-05 00:58:53.373211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lgxM7L-2H2s-ydZZ-G3Mt-VVkw-Jptq-qugIyB', 'scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9', 'scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtpNGr-twnR-Z5N1-ELuq-SfMI-3xi9-STF9tw', 'scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff', 'scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373346 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.373355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302', 'scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373384 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.373393 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9', 'dm-uuid-LVM-yftGaJfF3fAOG2rIDGE3fDbcvFqQc3krVsVongDe66YEBcfSeoCfwGjB54VjJdci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769', 'dm-uuid-LVM-Qfmqg5JUUSt7eCfNBoqOJHNYrALv8lFXgkFwVgtPuBbxsgTPXNNDi25IhISi2UCn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-05 00:58:53.373490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 00:58:53.373527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373539 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NHu62d-t11c-UK62-E30C-U5Oe-QyNU-2jm3BJ', 'scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52', 'scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZlCbJ8-1XNk-wRmZ-rsfx-5dxN-dsVr-H6mV0e', 'scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3', 'scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c', 'scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 00:58:53.373800 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.373808 | orchestrator | 2026-01-05 00:58:53.373821 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-05 00:58:53.373834 | orchestrator | Monday 05 January 2026 00:56:55 +0000 (0:00:00.618) 0:00:18.353 ******** 2026-01-05 00:58:53.373846 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc', 'dm-uuid-LVM-gSvEmzN4sR9qQBYCmcrvBPRZDc8ahtdz7QNh6Z7yAClPqMCMIbCPf8VhzgZxO5zo'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21', 'dm-uuid-LVM-MiZyFfsPoyjf4UhEA6dyhdxf8Nt4buWcB0XMxgbd6nRp4y3WboeXGvfpk5cHIS0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757', 'dm-uuid-LVM-WytFOHQK3TrfIaOFPVQ0VV2bPy4iCg1x50pstBe59FSIXJ1gqkDnGo60OOnA4yLO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.373998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d', 'dm-uuid-LVM-qPfa1lYL90pRKqe9QP0OQgRUjxiwecdBx92dXsfZGMyB7zYsWbhzbfqkgaiYUwfs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-01-05 00:58:53.374056 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374062 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374110 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a9aa3c7-7c88-41e1-87a4-5a9cdf824a11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f6123202--7d2d--5b15--b15a--b013203adbfc-osd--block--f6123202--7d2d--5b15--b15a--b013203adbfc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-16yBjx-FwIA-tBBg-2Dng-Ip0w-C2XU-Haljpf', 'scsi-0QEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4', 'scsi-SQEMU_QEMU_HARDDISK_2b1c1f48-cee6-4c03-87f8-c43c8286bcc4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21-osd--block--6549b2e5--b8c2--5b01--a1b7--5ee8ee491b21'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9EP0q-zA7u-Zh2T-PDAd-IujH-Rp2z-NGEN8T', 'scsi-0QEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392', 'scsi-SQEMU_QEMU_HARDDISK_b9761713-1df3-4432-b4b3-360f49d55392'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0', 'scsi-SQEMU_QEMU_HARDDISK_383c4b06-6a59-4554-8f50-cd156928eda0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374224 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16', 'scsi-SQEMU_QEMU_HARDDISK_796c4eb8-0610-4712-8614-781cad59caeb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--846bb30c--958c--57a2--8682--0625433ec757-osd--block--846bb30c--958c--57a2--8682--0625433ec757'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lgxM7L-2H2s-ydZZ-G3Mt-VVkw-Jptq-qugIyB', 'scsi-0QEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9', 'scsi-SQEMU_QEMU_HARDDISK_f8dcabc6-fabd-45fd-9c41-4607b08934e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--be99b097--8f9c--5b18--b9e6--1dc57f49383d-osd--block--be99b097--8f9c--5b18--b9e6--1dc57f49383d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtpNGr-twnR-Z5N1-ELuq-SfMI-3xi9-STF9tw', 'scsi-0QEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff', 'scsi-SQEMU_QEMU_HARDDISK_b121dca3-24d1-4b7b-930a-60908a09b3ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302', 'scsi-SQEMU_QEMU_HARDDISK_891598f0-de5b-4bdc-89c5-6a431d2de302'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374302 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.374315 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374321 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.374330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9', 'dm-uuid-LVM-yftGaJfF3fAOG2rIDGE3fDbcvFqQc3krVsVongDe66YEBcfSeoCfwGjB54VjJdci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769', 'dm-uuid-LVM-Qfmqg5JUUSt7eCfNBoqOJHNYrALv8lFXgkFwVgtPuBbxsgTPXNNDi25IhISi2UCn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374355 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374361 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374423 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374444 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16', 'scsi-SQEMU_QEMU_HARDDISK_4705a6e7-7472-4153-8b28-61d97fe23078-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8c427200--cd92--5345--a12e--93ab1a68a0a9-osd--block--8c427200--cd92--5345--a12e--93ab1a68a0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NHu62d-t11c-UK62-E30C-U5Oe-QyNU-2jm3BJ', 'scsi-0QEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52', 'scsi-SQEMU_QEMU_HARDDISK_b0dfd45a-7f89-49c0-be70-a4c437682b52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f0a3b48c--8251--5295--95c4--04cb80bcb769-osd--block--f0a3b48c--8251--5295--95c4--04cb80bcb769'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZlCbJ8-1XNk-wRmZ-rsfx-5dxN-dsVr-H6mV0e', 'scsi-0QEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3', 'scsi-SQEMU_QEMU_HARDDISK_03b7017e-e1b0-457d-9587-8b11f2102bb3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c', 'scsi-SQEMU_QEMU_HARDDISK_ecd5c862-499b-48c6-9c2d-7fcffc72f10c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 00:58:53.374516 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.374524 | orchestrator | 2026-01-05 00:58:53.374533 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-05 00:58:53.374546 | orchestrator | Monday 05 January 2026 00:56:56 +0000 (0:00:00.760) 0:00:19.114 ******** 2026-01-05 00:58:53.374555 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:58:53.374565 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:58:53.374574 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:58:53.374582 | orchestrator | 2026-01-05 00:58:53.374588 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-01-05 00:58:53.374594 | orchestrator | Monday 05 January 2026 00:56:57 +0000 (0:00:00.827) 0:00:19.941 ******** 2026-01-05 00:58:53.374599 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:58:53.374605 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:58:53.374610 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:58:53.374615 | orchestrator | 2026-01-05 00:58:53.374621 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 00:58:53.374626 | orchestrator | Monday 05 January 2026 00:56:57 +0000 (0:00:00.513) 0:00:20.455 ******** 2026-01-05 00:58:53.374632 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:58:53.374637 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:58:53.374642 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:58:53.374648 | orchestrator | 2026-01-05 00:58:53.374653 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-05 00:58:53.374659 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:00.686) 0:00:21.141 ******** 2026-01-05 00:58:53.374664 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.374670 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.374675 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.374681 | orchestrator | 2026-01-05 00:58:53.374686 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 00:58:53.374692 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:00.315) 0:00:21.457 ******** 2026-01-05 00:58:53.374699 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.374708 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.374716 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.374725 | orchestrator | 2026-01-05 00:58:53.374734 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-01-05 00:58:53.374743 | orchestrator | Monday 05 January 2026 00:56:59 +0000 (0:00:00.485) 0:00:21.942 ******** 2026-01-05 00:58:53.374751 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.374760 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.374769 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.374778 | orchestrator | 2026-01-05 00:58:53.374787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-05 00:58:53.374798 | orchestrator | Monday 05 January 2026 00:56:59 +0000 (0:00:00.580) 0:00:22.523 ******** 2026-01-05 00:58:53.374807 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-05 00:58:53.374815 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-05 00:58:53.374823 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-05 00:58:53.374831 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-05 00:58:53.374840 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-05 00:58:53.374849 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-05 00:58:53.374858 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-05 00:58:53.374866 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-05 00:58:53.374875 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-05 00:58:53.374884 | orchestrator | 2026-01-05 00:58:53.374979 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-05 00:58:53.375001 | orchestrator | Monday 05 January 2026 00:57:00 +0000 (0:00:00.935) 0:00:23.458 ******** 2026-01-05 00:58:53.375060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 00:58:53.375145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 00:58:53.375155 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-05 00:58:53.375165 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375173 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-05 00:58:53.375182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-05 00:58:53.375190 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-05 00:58:53.375199 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.375209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-05 00:58:53.375217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-05 00:58:53.375227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-05 00:58:53.375236 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.375242 | orchestrator | 2026-01-05 00:58:53.375248 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-05 00:58:53.375253 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:00.382) 0:00:23.841 ******** 2026-01-05 00:58:53.375260 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:58:53.375266 | orchestrator | 2026-01-05 00:58:53.375271 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-05 00:58:53.375279 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:00.709) 0:00:24.550 ******** 2026-01-05 00:58:53.375306 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375321 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.375327 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.375332 | orchestrator | 2026-01-05 00:58:53.375347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-01-05 00:58:53.375353 | orchestrator | Monday 05 January 2026 00:57:02 +0000 (0:00:00.328) 0:00:24.879 ******** 2026-01-05 00:58:53.375374 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375408 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.375415 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.375433 | orchestrator | 2026-01-05 00:58:53.375452 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-05 00:58:53.375470 | orchestrator | Monday 05 January 2026 00:57:02 +0000 (0:00:00.348) 0:00:25.227 ******** 2026-01-05 00:58:53.375511 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375517 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.375530 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:58:53.375536 | orchestrator | 2026-01-05 00:58:53.375549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-05 00:58:53.375554 | orchestrator | Monday 05 January 2026 00:57:02 +0000 (0:00:00.333) 0:00:25.561 ******** 2026-01-05 00:58:53.375560 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:58:53.375573 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:58:53.375578 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:58:53.375591 | orchestrator | 2026-01-05 00:58:53.375597 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-05 00:58:53.375603 | orchestrator | Monday 05 January 2026 00:57:03 +0000 (0:00:00.676) 0:00:26.238 ******** 2026-01-05 00:58:53.375608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:58:53.375613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:58:53.375619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:58:53.375624 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375630 | 
orchestrator | 2026-01-05 00:58:53.375635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-05 00:58:53.375641 | orchestrator | Monday 05 January 2026 00:57:03 +0000 (0:00:00.413) 0:00:26.651 ******** 2026-01-05 00:58:53.375646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:58:53.375658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:58:53.375664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:58:53.375669 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375675 | orchestrator | 2026-01-05 00:58:53.375680 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 00:58:53.375686 | orchestrator | Monday 05 January 2026 00:57:04 +0000 (0:00:00.377) 0:00:27.029 ******** 2026-01-05 00:58:53.375694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 00:58:53.375703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 00:58:53.375712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 00:58:53.375720 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.375732 | orchestrator | 2026-01-05 00:58:53.375744 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 00:58:53.375752 | orchestrator | Monday 05 January 2026 00:57:04 +0000 (0:00:00.417) 0:00:27.446 ******** 2026-01-05 00:58:53.375761 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:58:53.375770 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:58:53.375779 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:58:53.375787 | orchestrator | 2026-01-05 00:58:53.375796 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 00:58:53.375803 | orchestrator | Monday 05 January 2026 00:57:04 +0000 
(0:00:00.338) 0:00:27.785 ******** 2026-01-05 00:58:53.375811 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 00:58:53.375819 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-05 00:58:53.375826 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 00:58:53.375834 | orchestrator | 2026-01-05 00:58:53.375842 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-05 00:58:53.375850 | orchestrator | Monday 05 January 2026 00:57:05 +0000 (0:00:00.559) 0:00:28.345 ******** 2026-01-05 00:58:53.375858 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:58:53.375867 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:58:53.375875 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:58:53.375883 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 00:58:53.375892 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 00:58:53.375900 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 00:58:53.375908 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 00:58:53.375916 | orchestrator | 2026-01-05 00:58:53.375925 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-05 00:58:53.375934 | orchestrator | Monday 05 January 2026 00:57:06 +0000 (0:00:01.021) 0:00:29.367 ******** 2026-01-05 00:58:53.375945 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 00:58:53.375954 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 00:58:53.375962 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 00:58:53.375970 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 00:58:53.375979 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 00:58:53.375988 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 00:58:53.376006 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 00:58:53.376015 | orchestrator | 2026-01-05 00:58:53.376024 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-05 00:58:53.376032 | orchestrator | Monday 05 January 2026 00:57:08 +0000 (0:00:02.104) 0:00:31.471 ******** 2026-01-05 00:58:53.376049 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:58:53.376059 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:58:53.376066 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-05 00:58:53.376092 | orchestrator | 2026-01-05 00:58:53.376098 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-05 00:58:53.376103 | orchestrator | Monday 05 January 2026 00:57:09 +0000 (0:00:00.401) 0:00:31.873 ******** 2026-01-05 00:58:53.376126 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 00:58:53.376142 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-05 00:58:53.376147 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 00:58:53.376169 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 00:58:53.376203 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 00:58:53.376209 | orchestrator | 2026-01-05 00:58:53.376229 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-05 00:58:53.376248 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:46.692) 0:01:18.566 ******** 2026-01-05 00:58:53.376272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376291 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376309 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376344 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 
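Each pool item passed to the "create openstack pool(s)" task above carries explicit replication parameters (pg_num, pgp_num, size, min_size). A minimal sketch of validating such pool specs before they reach `ceph osd pool create` — this helper is illustrative only and is not part of ceph-ansible or the playbook shown:

```python
# Hypothetical validator for pool spec dicts shaped like the loop items in the
# log above (e.g. {'name': 'volumes', 'pg_num': 32, 'pgp_num': 32, 'size': 3,
# 'min_size': 0, ...}). Illustration only, not part of the deployed playbook.

def validate_pool_spec(spec: dict) -> list:
    """Return a list of problems found in a single pool specification."""
    problems = []
    if not spec.get("name"):
        problems.append("pool needs a non-empty name")
    pg_num = spec.get("pg_num", 0)
    pgp_num = spec.get("pgp_num", pg_num)
    # pg_num is conventionally a positive power of two; pgp_num may not
    # exceed pg_num.
    if pg_num <= 0 or (pg_num & (pg_num - 1)) != 0:
        problems.append("pg_num %s is not a positive power of two" % pg_num)
    if pgp_num > pg_num:
        problems.append("pgp_num %s exceeds pg_num %s" % (pgp_num, pg_num))
    size = spec.get("size", 3)
    # In the log, min_size is 0, which we read here as "use the default".
    min_size = spec.get("min_size", 0) or max(size - 1, 1)
    if min_size > size:
        problems.append("min_size %s exceeds size %s" % (min_size, size))
    return problems


pools = [
    {"name": "backups", "pg_num": 32, "pgp_num": 32, "size": 3, "min_size": 0},
    {"name": "volumes", "pg_num": 32, "pgp_num": 32, "size": 3, "min_size": 0},
]
assert all(validate_pool_spec(p) == [] for p in pools)
```

With all five pools in the log using pg_num=32, pgp_num=32, size=3, the checks pass; a spec with pgp_num greater than pg_num or min_size greater than size would be flagged before the expensive pool-creation step runs.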
00:58:53.376364 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-05 00:58:53.376372 | orchestrator | 2026-01-05 00:58:53.376382 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-05 00:58:53.376392 | orchestrator | Monday 05 January 2026 00:58:20 +0000 (0:00:24.428) 0:01:42.995 ******** 2026-01-05 00:58:53.376401 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376410 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376419 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376427 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376443 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376459 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 00:58:53.376468 | orchestrator | 2026-01-05 00:58:53.376476 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-05 00:58:53.376484 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:12.636) 0:01:55.632 ******** 2026-01-05 00:58:53.376493 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376502 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376512 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376531 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376548 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376566 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376581 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376590 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376642 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376655 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376664 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376674 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376683 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 00:58:53.376691 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 00:58:53.376700 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 00:58:53.376709 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-05 00:58:53.376718 | orchestrator | 2026-01-05 00:58:53.376727 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:58:53.376736 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-05 00:58:53.376747 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 00:58:53.376763 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-05 00:58:53.376773 | orchestrator | 2026-01-05 00:58:53.376781 | orchestrator | 2026-01-05 00:58:53.376790 | orchestrator | 2026-01-05 00:58:53.376798 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:58:53.376806 | orchestrator | Monday 05 January 2026 00:58:50 +0000 (0:00:17.666) 0:02:13.298 ******** 2026-01-05 00:58:53.376814 | orchestrator | =============================================================================== 2026-01-05 00:58:53.376823 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.69s 2026-01-05 00:58:53.376833 | orchestrator | generate keys ---------------------------------------------------------- 24.43s 2026-01-05 00:58:53.376842 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.67s 2026-01-05 00:58:53.376851 | orchestrator | get keys from monitors ------------------------------------------------- 12.64s 2026-01-05 00:58:53.376866 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.25s 2026-01-05 00:58:53.376884 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2026-01-05 00:58:53.376892 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.10s 2026-01-05 00:58:53.376900 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2026-01-05 00:58:53.376908 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s 2026-01-05 00:58:53.376916 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s 2026-01-05 
00:58:53.376925 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-01-05 00:58:53.376933 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.83s 2026-01-05 00:58:53.376941 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.76s 2026-01-05 00:58:53.376950 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.76s 2026-01-05 00:58:53.376959 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2026-01-05 00:58:53.376967 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2026-01-05 00:58:53.376976 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s 2026-01-05 00:58:53.376984 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2026-01-05 00:58:53.376992 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-01-05 00:58:53.377001 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2026-01-05 00:58:53.377010 | orchestrator | 2026-01-05 00:58:53 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:53.377020 | orchestrator | 2026-01-05 00:58:53 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:53.377027 | orchestrator | 2026-01-05 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:56.433923 | orchestrator | 2026-01-05 00:58:56 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:58:56.435138 | orchestrator | 2026-01-05 00:58:56 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:56.436468 | orchestrator | 2026-01-05 00:58:56 | INFO  | Task 
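The TASKS RECAP above lists tasks sorted by duration in a fixed "task name ---- NN.NNs" layout. A hedged sketch of parsing such recap lines into (name, seconds) pairs — the line format is assumed from this log, and the regex is an illustration, not a tool shipped with Ansible:

```python
import re

# Parse Ansible profile_tasks recap lines of the form
#   "create openstack pool(s) ------------------- 46.69s"
# into (task_name, seconds) tuples. Format assumed from the log above.
RECAP_RE = re.compile(r"^(.*?)\s-+\s(\d+\.\d+)s$")


def parse_recap(lines):
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group(1).strip(), float(m.group(2))))
    return out


sample = [
    "create openstack pool(s) ------------------------------------ 46.69s",
    "generate keys ----------------------------------------------- 24.43s",
    "copy ceph key(s) if needed ---------------------------------- 17.67s",
]
assert parse_recap(sample) == [
    ("create openstack pool(s)", 46.69),
    ("generate keys", 24.43),
    ("copy ceph key(s) if needed", 17.67),
]
```

This makes it easy to confirm what the recap already shows: pool creation (46.69s) dominates this play, followed by key generation and distribution.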
388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:56.436502 | orchestrator | 2026-01-05 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:59.481865 | orchestrator | 2026-01-05 00:58:59 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:58:59.483705 | orchestrator | 2026-01-05 00:58:59 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:58:59.487489 | orchestrator | 2026-01-05 00:58:59 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:58:59.487533 | orchestrator | 2026-01-05 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:02.534455 | orchestrator | 2026-01-05 00:59:02 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:02.535698 | orchestrator | 2026-01-05 00:59:02 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:02.536973 | orchestrator | 2026-01-05 00:59:02 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:02.537010 | orchestrator | 2026-01-05 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:05.577989 | orchestrator | 2026-01-05 00:59:05 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:05.578219 | orchestrator | 2026-01-05 00:59:05 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:05.578800 | orchestrator | 2026-01-05 00:59:05 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:05.578818 | orchestrator | 2026-01-05 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:08.628158 | orchestrator | 2026-01-05 00:59:08 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:08.630413 | orchestrator | 2026-01-05 00:59:08 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state 
STARTED 2026-01-05 00:59:08.632856 | orchestrator | 2026-01-05 00:59:08 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:08.632913 | orchestrator | 2026-01-05 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:11.677856 | orchestrator | 2026-01-05 00:59:11 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:11.680430 | orchestrator | 2026-01-05 00:59:11 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:11.682374 | orchestrator | 2026-01-05 00:59:11 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:11.682436 | orchestrator | 2026-01-05 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:14.733622 | orchestrator | 2026-01-05 00:59:14 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:14.735508 | orchestrator | 2026-01-05 00:59:14 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:14.735620 | orchestrator | 2026-01-05 00:59:14 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:14.735806 | orchestrator | 2026-01-05 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:17.789012 | orchestrator | 2026-01-05 00:59:17 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:17.790732 | orchestrator | 2026-01-05 00:59:17 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:17.792329 | orchestrator | 2026-01-05 00:59:17 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:17.792391 | orchestrator | 2026-01-05 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:20.835174 | orchestrator | 2026-01-05 00:59:20 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:20.837658 | orchestrator | 
2026-01-05 00:59:20 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:20.839128 | orchestrator | 2026-01-05 00:59:20 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:20.839173 | orchestrator | 2026-01-05 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:23.890967 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:23.892739 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:23.894134 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:23.894187 | orchestrator | 2026-01-05 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:26.951762 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:26.953618 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:26.954714 | orchestrator | 2026-01-05 00:59:26 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:26.954777 | orchestrator | 2026-01-05 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:29.988509 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state STARTED 2026-01-05 00:59:29.989186 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:29.989599 | orchestrator | 2026-01-05 00:59:29 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:29.990154 | orchestrator | 2026-01-05 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:33.043699 | orchestrator | 2026-01-05 00:59:33 | INFO  | Task 
d6ec8e04-4a81-4866-990e-b11f1844c90b is in state STARTED 2026-01-05 00:59:33.043804 | orchestrator | 2026-01-05 00:59:33 | INFO  | Task c6383925-d187-45a3-a63d-8cb38950eb8e is in state SUCCESS 2026-01-05 00:59:33.044988 | orchestrator | 2026-01-05 00:59:33 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:33.046171 | orchestrator | 2026-01-05 00:59:33 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:33.046218 | orchestrator | 2026-01-05 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:36.086452 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task d6ec8e04-4a81-4866-990e-b11f1844c90b is in state STARTED 2026-01-05 00:59:36.087915 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:36.090803 | orchestrator | 2026-01-05 00:59:36 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:36.092724 | orchestrator | 2026-01-05 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:39.131713 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task d6ec8e04-4a81-4866-990e-b11f1844c90b is in state STARTED 2026-01-05 00:59:39.135467 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:39.138952 | orchestrator | 2026-01-05 00:59:39 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state STARTED 2026-01-05 00:59:39.139077 | orchestrator | 2026-01-05 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:42.184788 | orchestrator | 2026-01-05 00:59:42 | INFO  | Task d6ec8e04-4a81-4866-990e-b11f1844c90b is in state STARTED 2026-01-05 00:59:42.187801 | orchestrator | 2026-01-05 00:59:42 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED 2026-01-05 00:59:42.193160 | orchestrator | 2026-01-05 00:59:42.193228 | orchestrator | 2026-01-05 
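The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above is a poll-until-done loop over asynchronous task IDs. A minimal sketch of that pattern, with hypothetical names (`get_state`, the state strings) inferred from the log rather than taken from the OSISM client itself:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0):
    """Poll task states until no task is STARTED anymore, or raise on timeout.

    get_state(task_id) -> a state string such as "STARTED" or "SUCCESS"
    (state names assumed from the log). Returns {task_id: final_state}.
    """
    deadline = time.monotonic() + timeout
    states = {t: get_state(t) for t in task_ids}
    while any(s == "STARTED" for s in states.values()):
        if time.monotonic() > deadline:
            raise TimeoutError("tasks still running: %s" % states)
        # "Wait 1 second(s) until the next check"
        time.sleep(interval)
        states = {t: get_state(t) for t in task_ids}
    return states
```

Note how, in the log, task c6383925-… flips to SUCCESS and drops out of the reported set while the remaining IDs keep being polled — exactly what re-querying the full state map on each iteration produces.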
00:59:42.193239 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-05 00:59:42.193248 | orchestrator | 2026-01-05 00:59:42.193256 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-05 00:59:42.193264 | orchestrator | Monday 05 January 2026 00:58:55 +0000 (0:00:00.158) 0:00:00.158 ******** 2026-01-05 00:59:42.193272 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 00:59:42.193281 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193288 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193295 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 00:59:42.193302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193408 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 00:59:42.193664 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 00:59:42.193677 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-05 00:59:42.193684 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 00:59:42.193691 | orchestrator | 2026-01-05 00:59:42.193698 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-05 00:59:42.193706 | orchestrator | Monday 05 January 2026 00:59:00 +0000 (0:00:04.822) 0:00:04.981 ******** 2026-01-05 00:59:42.193713 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 00:59:42.193720 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193727 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 00:59:42.193753 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 00:59:42.193768 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 00:59:42.193775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-05 00:59:42.193782 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 00:59:42.193789 | orchestrator | 2026-01-05 00:59:42.193796 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-05 00:59:42.193803 | orchestrator | Monday 05 January 2026 00:59:04 +0000 (0:00:04.245) 0:00:09.226 ******** 2026-01-05 00:59:42.193811 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:59:42.193819 | orchestrator | 2026-01-05 00:59:42.193827 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-05 00:59:42.193834 | orchestrator | Monday 05 January 2026 00:59:05 +0000 (0:00:01.070) 0:00:10.296 ******** 2026-01-05 00:59:42.193841 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-05 00:59:42.193849 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193856 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193863 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 00:59:42.193870 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.193877 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-05 00:59:42.193884 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-05 00:59:42.193892 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-05 00:59:42.193899 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-05 00:59:42.193906 | orchestrator | 2026-01-05 00:59:42.193913 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-05 00:59:42.193920 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:14.018) 0:00:24.314 ******** 2026-01-05 00:59:42.193927 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-05 00:59:42.193934 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-05 00:59:42.193942 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 00:59:42.193976 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 00:59:42.194064 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 00:59:42.194075 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 00:59:42.194083 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-05 00:59:42.194090 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-05 00:59:42.194097 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-05 00:59:42.194104 | orchestrator | 2026-01-05 00:59:42.194111 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-05 00:59:42.194119 | orchestrator | Monday 05 January 2026 00:59:22 +0000 (0:00:03.105) 0:00:27.420 ******** 2026-01-05 00:59:42.194148 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-05 00:59:42.194156 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.194163 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.194170 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 00:59:42.194177 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 00:59:42.194184 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-05 00:59:42.194192 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-05 00:59:42.194199 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-05 00:59:42.194206 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-05 00:59:42.194213 | orchestrator | 2026-01-05 00:59:42.194220 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:59:42.194228 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:59:42.194237 | orchestrator | 2026-01-05 00:59:42.194244 | orchestrator | 2026-01-05 00:59:42.194252 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:59:42.194259 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:07.226) 0:00:34.647 ******** 2026-01-05 00:59:42.194271 | orchestrator | =============================================================================== 2026-01-05 00:59:42.194279 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.02s 2026-01-05 00:59:42.194287 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.23s 2026-01-05 00:59:42.194296 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.82s 2026-01-05 00:59:42.194305 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.25s 2026-01-05 00:59:42.194313 | orchestrator | Check if target directories exist --------------------------------------- 3.11s 2026-01-05 00:59:42.194321 | orchestrator | Create share directory -------------------------------------------------- 1.07s 2026-01-05 00:59:42.194329 | orchestrator | 2026-01-05 00:59:42.194337 | orchestrator | 2026-01-05 00:59:42.194346 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:59:42.194355 | orchestrator | 2026-01-05 00:59:42.194363 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:59:42.194372 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.288) 0:00:00.288 ******** 2026-01-05 00:59:42.194381 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.194390 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.194398 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.194407 | orchestrator | 2026-01-05 
00:59:42.194422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:59:42.194430 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.297) 0:00:00.585 ******** 2026-01-05 00:59:42.194438 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-05 00:59:42.194447 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-05 00:59:42.194456 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-05 00:59:42.194464 | orchestrator | 2026-01-05 00:59:42.194473 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-05 00:59:42.194481 | orchestrator | 2026-01-05 00:59:42.194489 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 00:59:42.194498 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.434) 0:00:01.020 ******** 2026-01-05 00:59:42.194506 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:42.194514 | orchestrator | 2026-01-05 00:59:42.194521 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-05 00:59:42.194528 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:00.557) 0:00:01.578 ******** 2026-01-05 00:59:42.194552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.194580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.194609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.194618 | orchestrator | 2026-01-05 00:59:42.194625 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-05 00:59:42.194633 | orchestrator | Monday 05 January 2026 00:57:50 +0000 (0:00:01.195) 0:00:02.774 ******** 2026-01-05 00:59:42.194640 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.194653 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.194660 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.194668 | orchestrator | 2026-01-05 00:59:42.194675 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 00:59:42.194682 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.496) 0:00:03.271 ******** 2026-01-05 00:59:42.194690 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 00:59:42.194697 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 00:59:42.194704 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 00:59:42.194712 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 00:59:42.194719 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 00:59:42.194726 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 00:59:42.194733 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-05 00:59:42.194740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 00:59:42.194748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 00:59:42.194755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  
2026-01-05 00:59:42.194762 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 00:59:42.194769 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 00:59:42.194776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 00:59:42.194783 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 00:59:42.194791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-05 00:59:42.194798 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 00:59:42.194805 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 00:59:42.194812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 00:59:42.194819 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 00:59:42.194827 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 00:59:42.194834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 00:59:42.194845 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 00:59:42.194853 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-05 00:59:42.194860 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 00:59:42.194868 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-05 00:59:42.194878 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-05 00:59:42.194886 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-05 00:59:42.194893 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-05 00:59:42.194900 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-05 00:59:42.194912 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-05 00:59:42.194920 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-05 00:59:42.194927 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-05 00:59:42.194934 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-05 00:59:42.194946 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-05 00:59:42.194953 | orchestrator | 2026-01-05 00:59:42.194960 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.194968 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.769) 0:00:04.041 ******** 2026-01-05 00:59:42.195050 | orchestrator | ok: 
[testbed-node-0] 2026-01-05 00:59:42.195064 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195076 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195088 | orchestrator | 2026-01-05 00:59:42.195100 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195110 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.351) 0:00:04.392 ******** 2026-01-05 00:59:42.195118 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195125 | orchestrator | 2026-01-05 00:59:42.195133 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.195140 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.130) 0:00:04.522 ******** 2026-01-05 00:59:42.195147 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195154 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.195162 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.195169 | orchestrator | 2026-01-05 00:59:42.195177 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.195185 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.512) 0:00:05.035 ******** 2026-01-05 00:59:42.195193 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.195201 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195209 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195216 | orchestrator | 2026-01-05 00:59:42.195224 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195232 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:00.324) 0:00:05.359 ******** 2026-01-05 00:59:42.195239 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195247 | orchestrator | 2026-01-05 00:59:42.195255 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2026-01-05 00:59:42.195263 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:00.154) 0:00:05.514 ******** 2026-01-05 00:59:42.195271 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195278 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.195286 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.195293 | orchestrator | 2026-01-05 00:59:42.195301 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.195309 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:00.308) 0:00:05.823 ******** 2026-01-05 00:59:42.195317 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.195325 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195332 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195340 | orchestrator | 2026-01-05 00:59:42.195348 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195356 | orchestrator | Monday 05 January 2026 00:57:54 +0000 (0:00:00.321) 0:00:06.145 ******** 2026-01-05 00:59:42.195365 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195387 | orchestrator | 2026-01-05 00:59:42.195400 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.195414 | orchestrator | Monday 05 January 2026 00:57:54 +0000 (0:00:00.342) 0:00:06.488 ******** 2026-01-05 00:59:42.195428 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195441 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.195454 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.195468 | orchestrator | 2026-01-05 00:59:42.195489 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.195503 | orchestrator | Monday 05 January 2026 00:57:54 +0000 (0:00:00.309) 0:00:06.798 ******** 2026-01-05 
00:59:42.195512 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.195578 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195587 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195594 | orchestrator | 2026-01-05 00:59:42.195602 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195616 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:00.317) 0:00:07.115 ******** 2026-01-05 00:59:42.195630 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195644 | orchestrator | 2026-01-05 00:59:42.195657 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.195672 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:00.151) 0:00:07.266 ******** 2026-01-05 00:59:42.195685 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195700 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.195716 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.195730 | orchestrator | 2026-01-05 00:59:42.195742 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.195750 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:00.334) 0:00:07.600 ******** 2026-01-05 00:59:42.195758 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.195766 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195774 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195781 | orchestrator | 2026-01-05 00:59:42.195789 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195797 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.527) 0:00:08.128 ******** 2026-01-05 00:59:42.195805 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195812 | orchestrator | 2026-01-05 00:59:42.195820 | orchestrator | TASK [horizon : Update custom 
policy file name] ******************************** 2026-01-05 00:59:42.195828 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.143) 0:00:08.271 ******** 2026-01-05 00:59:42.195835 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195843 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.195852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.195866 | orchestrator | 2026-01-05 00:59:42.195879 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.195892 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.298) 0:00:08.569 ******** 2026-01-05 00:59:42.195906 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.195920 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.195928 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.195936 | orchestrator | 2026-01-05 00:59:42.195950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.195958 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.447) 0:00:09.017 ******** 2026-01-05 00:59:42.195966 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.195974 | orchestrator | 2026-01-05 00:59:42.196028 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.196037 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.169) 0:00:09.186 ******** 2026-01-05 00:59:42.196045 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196052 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196060 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196078 | orchestrator | 2026-01-05 00:59:42.196086 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.196094 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.286) 0:00:09.472 
******** 2026-01-05 00:59:42.196101 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.196109 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.196117 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.196125 | orchestrator | 2026-01-05 00:59:42.196133 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.196141 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.536) 0:00:10.009 ******** 2026-01-05 00:59:42.196148 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196156 | orchestrator | 2026-01-05 00:59:42.196164 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.196171 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.152) 0:00:10.162 ******** 2026-01-05 00:59:42.196240 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196250 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196258 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196266 | orchestrator | 2026-01-05 00:59:42.196274 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.196282 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.331) 0:00:10.493 ******** 2026-01-05 00:59:42.196290 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.196298 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.196306 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.196313 | orchestrator | 2026-01-05 00:59:42.196321 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.196329 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.366) 0:00:10.860 ******** 2026-01-05 00:59:42.196337 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196345 | orchestrator | 2026-01-05 00:59:42.196352 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-01-05 00:59:42.196360 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.136) 0:00:10.996 ******** 2026-01-05 00:59:42.196368 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196376 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196383 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196391 | orchestrator | 2026-01-05 00:59:42.196399 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.196407 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:00.365) 0:00:11.361 ******** 2026-01-05 00:59:42.196414 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.196422 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.196430 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.196438 | orchestrator | 2026-01-05 00:59:42.196446 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.196454 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:00.571) 0:00:11.933 ******** 2026-01-05 00:59:42.196470 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196479 | orchestrator | 2026-01-05 00:59:42.196486 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.196494 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:00.154) 0:00:12.088 ******** 2026-01-05 00:59:42.196502 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196510 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196518 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196525 | orchestrator | 2026-01-05 00:59:42.196533 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 00:59:42.196541 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:00.330) 
0:00:12.418 ******** 2026-01-05 00:59:42.196549 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:42.196557 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:42.196563 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:42.196570 | orchestrator | 2026-01-05 00:59:42.196577 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 00:59:42.196594 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:00.358) 0:00:12.776 ******** 2026-01-05 00:59:42.196601 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196608 | orchestrator | 2026-01-05 00:59:42.196614 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 00:59:42.196621 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:00.186) 0:00:12.963 ******** 2026-01-05 00:59:42.196627 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.196634 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196641 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196647 | orchestrator | 2026-01-05 00:59:42.196654 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-05 00:59:42.196660 | orchestrator | Monday 05 January 2026 00:58:01 +0000 (0:00:00.519) 0:00:13.483 ******** 2026-01-05 00:59:42.196667 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:42.196674 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:42.196680 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:42.196687 | orchestrator | 2026-01-05 00:59:42.196694 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-05 00:59:42.196700 | orchestrator | Monday 05 January 2026 00:58:03 +0000 (0:00:01.824) 0:00:15.308 ******** 2026-01-05 00:59:42.196707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 
00:59:42.196714 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 00:59:42.196721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 00:59:42.196727 | orchestrator | 2026-01-05 00:59:42.196738 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-05 00:59:42.196745 | orchestrator | Monday 05 January 2026 00:58:05 +0000 (0:00:02.027) 0:00:17.335 ******** 2026-01-05 00:59:42.196752 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 00:59:42.196759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 00:59:42.196766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 00:59:42.196773 | orchestrator | 2026-01-05 00:59:42.196779 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-05 00:59:42.196789 | orchestrator | Monday 05 January 2026 00:58:07 +0000 (0:00:02.257) 0:00:19.593 ******** 2026-01-05 00:59:42.196800 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 00:59:42.196810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 00:59:42.196827 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 00:59:42.196840 | orchestrator | 2026-01-05 00:59:42.196851 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-05 00:59:42.196900 | orchestrator | Monday 05 January 2026 00:58:09 +0000 (0:00:02.062) 0:00:21.655 ******** 2026-01-05 00:59:42.196912 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 00:59:42.196921 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.196930 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.196943 | orchestrator | 2026-01-05 00:59:42.196952 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-05 00:59:42.196964 | orchestrator | Monday 05 January 2026 00:58:09 +0000 (0:00:00.318) 0:00:21.974 ******** 2026-01-05 00:59:42.196976 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.197005 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.197015 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.197025 | orchestrator | 2026-01-05 00:59:42.197035 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 00:59:42.197056 | orchestrator | Monday 05 January 2026 00:58:10 +0000 (0:00:00.287) 0:00:22.261 ******** 2026-01-05 00:59:42.197067 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:42.197078 | orchestrator | 2026-01-05 00:59:42.197088 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-05 00:59:42.197100 | orchestrator | Monday 05 January 2026 00:58:11 +0000 (0:00:00.843) 0:00:23.105 ******** 2026-01-05 00:59:42.197156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.197172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.197201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
2026-01-05 00:59:42.197220 | orchestrator | True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80',
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42 | INFO  | Task 388af3b5-9077-40c4-8c21-6422f2971142 is in state SUCCESS 2026-01-05 00:59:42.197233 | orchestrator | 2026-01-05 00:59:42.197244 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-05 00:59:42.197254 | orchestrator | Monday 05 January 2026 00:58:12 +0000 (0:00:01.629) 0:00:24.734 ******** 2026-01-05 00:59:42.197272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197295 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.197313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197324 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.197358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197377 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.197388 | orchestrator | 2026-01-05 00:59:42.197398 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-05 00:59:42.197408 | orchestrator | Monday 05 January 2026 00:58:13 +0000 (0:00:00.744) 0:00:25.478 ******** 2026-01-05 00:59:42.197425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197444 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.197464 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197476 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.197494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:42.197515 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.197526 | orchestrator | 2026-01-05 00:59:42.197537 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-05 00:59:42.197548 | orchestrator | Monday 05 January 2026 00:58:14 +0000 (0:00:00.845) 0:00:26.324 ******** 2026-01-05 00:59:42.197569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.197586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.197624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:42.197638 | orchestrator | 2026-01-05 00:59:42.197650 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 00:59:42.197662 | orchestrator | Monday 05 January 2026 00:58:15 +0000 (0:00:01.704) 0:00:28.029 ******** 2026-01-05 00:59:42.197675 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:42.197687 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:42.197706 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:42.197718 | 
orchestrator | 2026-01-05 00:59:42.197783 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 00:59:42.197797 | orchestrator | Monday 05 January 2026 00:58:16 +0000 (0:00:00.312) 0:00:28.342 ******** 2026-01-05 00:59:42.197810 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:42.197822 | orchestrator | 2026-01-05 00:59:42.197834 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-05 00:59:42.197844 | orchestrator | Monday 05 January 2026 00:58:16 +0000 (0:00:00.614) 0:00:28.956 ******** 2026-01-05 00:59:42.197855 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:42.197866 | orchestrator | 2026-01-05 00:59:42.197878 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-05 00:59:42.197890 | orchestrator | Monday 05 January 2026 00:58:19 +0000 (0:00:02.604) 0:00:31.560 ******** 2026-01-05 00:59:42.197902 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:42.197914 | orchestrator | 2026-01-05 00:59:42.197926 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-05 00:59:42.197938 | orchestrator | Monday 05 January 2026 00:58:22 +0000 (0:00:02.750) 0:00:34.310 ******** 2026-01-05 00:59:42.197950 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:42.197963 | orchestrator | 2026-01-05 00:59:42.197975 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 00:59:42.198088 | orchestrator | Monday 05 January 2026 00:58:38 +0000 (0:00:16.667) 0:00:50.978 ******** 2026-01-05 00:59:42.198100 | orchestrator | 2026-01-05 00:59:42.198112 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 00:59:42.198125 | orchestrator | Monday 05 January 2026 
00:58:39 +0000 (0:00:00.084) 0:00:51.062 ******** 2026-01-05 00:59:42.198136 | orchestrator | 2026-01-05 00:59:42.198149 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 00:59:42.198162 | orchestrator | Monday 05 January 2026 00:58:39 +0000 (0:00:00.070) 0:00:51.133 ******** 2026-01-05 00:59:42.198173 | orchestrator | 2026-01-05 00:59:42.198184 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-05 00:59:42.198195 | orchestrator | Monday 05 January 2026 00:58:39 +0000 (0:00:00.073) 0:00:51.207 ******** 2026-01-05 00:59:42.198206 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:42.198218 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:42.198230 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:42.198242 | orchestrator | 2026-01-05 00:59:42.198254 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:59:42.198275 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-05 00:59:42.198289 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 00:59:42.198301 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 00:59:42.198313 | orchestrator | 2026-01-05 00:59:42.198324 | orchestrator | 2026-01-05 00:59:42.198335 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:59:42.198346 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:01:01.863) 0:01:53.070 ******** 2026-01-05 00:59:42.198358 | orchestrator | =============================================================================== 2026-01-05 00:59:42.198369 | orchestrator | horizon : Restart horizon container ------------------------------------ 61.86s 2026-01-05 
00:59:42.198381 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.67s 2026-01-05 00:59:42.198392 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.75s 2026-01-05 00:59:42.198404 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.60s 2026-01-05 00:59:42.198428 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.26s 2026-01-05 00:59:42.198441 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.06s 2026-01-05 00:59:42.198452 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.03s 2026-01-05 00:59:42.198464 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.82s 2026-01-05 00:59:42.198475 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.70s 2026-01-05 00:59:42.198487 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2026-01-05 00:59:42.198499 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-01-05 00:59:42.198512 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s 2026-01-05 00:59:42.198523 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2026-01-05 00:59:42.198533 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-01-05 00:59:42.198551 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.74s 2026-01-05 00:59:42.198562 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-01-05 00:59:42.198572 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-01-05 
00:59:42.198583 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2026-01-05 00:59:42.198594 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2026-01-05 00:59:42.198605 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2026-01-05 00:59:42.198616 | orchestrator | 2026-01-05 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:45.226249 | orchestrator | 2026-01-05 00:59:45 | INFO  | Task d6ec8e04-4a81-4866-990e-b11f1844c90b is in state STARTED
2026-01-05 00:59:45.232724 | orchestrator | 2026-01-05 00:59:45 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED
2026-01-05 00:59:45.232798 | orchestrator | 2026-01-05 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:33.990218 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task d6ec8e04-4a81-4866-990e-b11f1844c90b is in state SUCCESS
2026-01-05 01:00:33.990877 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state STARTED
2026-01-05 01:00:33.994181 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task c29dff8e-e92d-4f5d-bd44-00d8a8c8b8ab is in state STARTED
2026-01-05 01:00:33.994213 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED
2026-01-05 01:00:33.994218 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED
2026-01-05 01:00:33.994223 | orchestrator | 2026-01-05 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:40.085553 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state STARTED
2026-01-05 01:00:40.087285 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task c29dff8e-e92d-4f5d-bd44-00d8a8c8b8ab is in state SUCCESS
2026-01-05 01:00:40.088164 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED
2026-01-05 01:00:40.090923 | orchestrator | 2026-01-05 01:00:40 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state STARTED
2026-01-05 01:00:40.091113 | orchestrator | 2026-01-05 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:00:43.139752 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state STARTED
2026-01-05 01:00:43.139842 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED
2026-01-05 01:00:43.139852 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 47c429c6-a338-47b2-be18-b1c030f882d2 is in state SUCCESS
2026-01-05 01:00:43.140519 | orchestrator |
2026-01-05 01:00:43.140551 | orchestrator |
2026-01-05 01:00:43.140568 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-05 01:00:43.140572 | orchestrator |
2026-01-05 01:00:43.140577 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-05 01:00:43.140581 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.272) 0:00:00.272 ********
2026-01-05 01:00:43.140586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-05 01:00:43.140592 | orchestrator |
2026-01-05 01:00:43.140596 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-05 01:00:43.140600 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:00.221) 0:00:00.494 ******** 2026-01-05 01:00:43.140664 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-05 01:00:43.140671 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-05 01:00:43.140675 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-05 01:00:43.140679 | orchestrator | 2026-01-05 01:00:43.140683 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-05 01:00:43.140687 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:01.284) 0:00:01.778 ******** 2026-01-05 01:00:43.140691 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-05 01:00:43.140694 | orchestrator | 2026-01-05 01:00:43.140698 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-05 01:00:43.140703 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:01.518) 0:00:03.297 ******** 2026-01-05 01:00:43.140707 | orchestrator | changed: [testbed-manager] 2026-01-05 01:00:43.140711 | orchestrator | 2026-01-05 01:00:43.140715 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-05 01:00:43.140718 | orchestrator | Monday 05 January 2026 00:59:38 +0000 (0:00:00.805) 0:00:04.102 ******** 2026-01-05 01:00:43.140722 | orchestrator | changed: [testbed-manager] 2026-01-05 01:00:43.140726 | orchestrator | 2026-01-05 01:00:43.140730 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-05 01:00:43.140733 | orchestrator | Monday 05 January 2026 00:59:39 +0000 (0:00:00.896) 0:00:04.999 ******** 2026-01-05 01:00:43.140737 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-01-05 01:00:43.140741 | orchestrator | ok: [testbed-manager] 2026-01-05 01:00:43.140745 | orchestrator | 2026-01-05 01:00:43.140749 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-05 01:00:43.140752 | orchestrator | Monday 05 January 2026 01:00:21 +0000 (0:00:42.687) 0:00:47.687 ******** 2026-01-05 01:00:43.140757 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-05 01:00:43.140763 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-05 01:00:43.140818 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-05 01:00:43.140824 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-05 01:00:43.140830 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-05 01:00:43.140836 | orchestrator | 2026-01-05 01:00:43.140842 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-05 01:00:43.140848 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:04.345) 0:00:52.032 ******** 2026-01-05 01:00:43.140855 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-05 01:00:43.140860 | orchestrator | 2026-01-05 01:00:43.140866 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-05 01:00:43.140946 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:00.476) 0:00:52.509 ******** 2026-01-05 01:00:43.140953 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:00:43.140958 | orchestrator | 2026-01-05 01:00:43.140964 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-05 01:00:43.140970 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:00.139) 0:00:52.649 ******** 2026-01-05 01:00:43.140976 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:00:43.140981 | orchestrator | 2026-01-05 01:00:43.140988 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-01-05 01:00:43.140995 | orchestrator | Monday 05 January 2026 01:00:27 +0000 (0:00:00.502) 0:00:53.151 ******** 2026-01-05 01:00:43.141001 | orchestrator | changed: [testbed-manager] 2026-01-05 01:00:43.141007 | orchestrator | 2026-01-05 01:00:43.141013 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-05 01:00:43.141019 | orchestrator | Monday 05 January 2026 01:00:29 +0000 (0:00:01.711) 0:00:54.863 ******** 2026-01-05 01:00:43.141025 | orchestrator | changed: [testbed-manager] 2026-01-05 01:00:43.141032 | orchestrator | 2026-01-05 01:00:43.141038 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-05 01:00:43.141054 | orchestrator | Monday 05 January 2026 01:00:29 +0000 (0:00:00.826) 0:00:55.689 ******** 2026-01-05 01:00:43.141061 | orchestrator | changed: [testbed-manager] 2026-01-05 01:00:43.141068 | orchestrator | 2026-01-05 01:00:43.141074 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-05 01:00:43.141383 | orchestrator | Monday 05 January 2026 01:00:30 +0000 (0:00:00.624) 0:00:56.314 ******** 2026-01-05 01:00:43.141400 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-05 01:00:43.141408 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-05 01:00:43.141415 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-05 01:00:43.141422 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-05 01:00:43.141429 | orchestrator | 2026-01-05 01:00:43.141436 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:00:43.141445 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:00:43.141453 | orchestrator | 2026-01-05 01:00:43.141460 | orchestrator | 2026-01-05 
01:00:43.141492 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:00:43.141507 | orchestrator | Monday 05 January 2026 01:00:32 +0000 (0:00:01.682) 0:00:57.996 ******** 2026-01-05 01:00:43.141515 | orchestrator | =============================================================================== 2026-01-05 01:00:43.141522 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.69s 2026-01-05 01:00:43.141528 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.35s 2026-01-05 01:00:43.141534 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.71s 2026-01-05 01:00:43.141541 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.68s 2026-01-05 01:00:43.141547 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.52s 2026-01-05 01:00:43.141553 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s 2026-01-05 01:00:43.141559 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s 2026-01-05 01:00:43.141565 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.83s 2026-01-05 01:00:43.141571 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2026-01-05 01:00:43.141578 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2026-01-05 01:00:43.141584 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.50s 2026-01-05 01:00:43.141590 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-01-05 01:00:43.141596 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-01-05 01:00:43.141603 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-01-05 01:00:43.141610 | orchestrator | 2026-01-05 01:00:43.141616 | orchestrator | 2026-01-05 01:00:43.141622 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:00:43.141628 | orchestrator | 2026-01-05 01:00:43.141634 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:00:43.141640 | orchestrator | Monday 05 January 2026 01:00:37 +0000 (0:00:00.203) 0:00:00.203 ******** 2026-01-05 01:00:43.141646 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.141653 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.141659 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.141665 | orchestrator | 2026-01-05 01:00:43.141672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:00:43.141678 | orchestrator | Monday 05 January 2026 01:00:37 +0000 (0:00:00.307) 0:00:00.510 ******** 2026-01-05 01:00:43.141685 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 01:00:43.141692 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 01:00:43.141706 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 01:00:43.141712 | orchestrator | 2026-01-05 01:00:43.141718 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-05 01:00:43.141725 | orchestrator | 2026-01-05 01:00:43.141731 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-05 01:00:43.141737 | orchestrator | Monday 05 January 2026 01:00:38 +0000 (0:00:00.692) 0:00:01.203 ******** 2026-01-05 01:00:43.141743 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.141749 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.141756 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 01:00:43.141762 | orchestrator | 2026-01-05 01:00:43.141768 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:00:43.141776 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:00:43.141783 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:00:43.141790 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:00:43.141796 | orchestrator | 2026-01-05 01:00:43.141802 | orchestrator | 2026-01-05 01:00:43.141809 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:00:43.141815 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:00.692) 0:00:01.896 ******** 2026-01-05 01:00:43.141821 | orchestrator | =============================================================================== 2026-01-05 01:00:43.141827 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.69s 2026-01-05 01:00:43.141833 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2026-01-05 01:00:43.141839 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-01-05 01:00:43.141846 | orchestrator | 2026-01-05 01:00:43.141852 | orchestrator | 2026-01-05 01:00:43.141858 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:00:43.141864 | orchestrator | 2026-01-05 01:00:43.141889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:00:43.141896 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-01-05 01:00:43.141902 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.141908 | 
orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.141914 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.141920 | orchestrator | 2026-01-05 01:00:43.141925 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:00:43.141931 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.310) 0:00:00.578 ******** 2026-01-05 01:00:43.141938 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 01:00:43.141944 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 01:00:43.141951 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 01:00:43.141957 | orchestrator | 2026-01-05 01:00:43.141963 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-05 01:00:43.141969 | orchestrator | 2026-01-05 01:00:43.141994 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.142007 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:00.453) 0:00:01.032 ******** 2026-01-05 01:00:43.142046 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:00:43.142056 | orchestrator | 2026-01-05 01:00:43.142063 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-05 01:00:43.142070 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:00.585) 0:00:01.617 ******** 2026-01-05 01:00:43.142083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142212 | orchestrator | 2026-01-05 01:00:43.142218 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-05 01:00:43.142225 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:01.869) 0:00:03.486 ******** 2026-01-05 01:00:43.142232 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.142238 | orchestrator | 2026-01-05 01:00:43.142244 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-05 01:00:43.142250 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.135) 0:00:03.622 ******** 2026-01-05 01:00:43.142256 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.142262 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.142269 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.142275 | orchestrator | 2026-01-05 01:00:43.142281 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-05 01:00:43.142287 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.443) 0:00:04.065 ******** 2026-01-05 01:00:43.142294 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 
01:00:43.142301 | orchestrator | 2026-01-05 01:00:43.142307 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.142319 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.858) 0:00:04.924 ******** 2026-01-05 01:00:43.142330 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:00:43.142336 | orchestrator | 2026-01-05 01:00:43.142346 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-05 01:00:43.142352 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:00.571) 0:00:05.495 ******** 2026-01-05 01:00:43.142359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142380 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142416 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142434 | orchestrator | 2026-01-05 01:00:43.142440 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-05 01:00:43.142446 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:03.413) 0:00:08.909 ******** 2026-01-05 01:00:43.142466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142486 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142492 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.142500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142523 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.142538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142558 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.142564 | orchestrator | 2026-01-05 01:00:43.142571 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 01:00:43.142578 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.653) 0:00:09.562 ******** 2026-01-05 01:00:43.142584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142649 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.142656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142677 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.142683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.142703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.142716 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.142722 | orchestrator | 2026-01-05 01:00:43.142728 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 01:00:43.142735 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.759) 0:00:10.322 ******** 2026-01-05 01:00:43.142741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142818 | orchestrator | 2026-01-05 01:00:43.142824 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 01:00:43.142830 | orchestrator | Monday 05 January 2026 00:58:01 +0000 (0:00:03.428) 0:00:13.751 ******** 2026-01-05 01:00:43.142846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.142921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.142927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.142952 | orchestrator | 2026-01-05 01:00:43.142960 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 01:00:43.142966 | orchestrator | Monday 05 January 2026 00:58:07 +0000 (0:00:05.708) 0:00:19.460 ******** 2026-01-05 01:00:43.142973 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.142980 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:00:43.142986 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:00:43.142994 | orchestrator | 2026-01-05 01:00:43.143002 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-05 01:00:43.143009 | orchestrator | Monday 05 January 2026 00:58:08 +0000 (0:00:01.488) 0:00:20.948 ******** 2026-01-05 01:00:43.143015 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143020 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143027 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143034 | orchestrator | 2026-01-05 01:00:43.143042 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-05 01:00:43.143049 | orchestrator | Monday 05 January 2026 00:58:09 +0000 (0:00:00.538) 0:00:21.486 ******** 2026-01-05 01:00:43.143057 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143064 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143072 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143079 | orchestrator | 2026-01-05 01:00:43.143086 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-05 01:00:43.143093 | orchestrator | Monday 05 January 2026 00:58:09 +0000 
(0:00:00.312) 0:00:21.799 ******** 2026-01-05 01:00:43.143100 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143107 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143114 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143121 | orchestrator | 2026-01-05 01:00:43.143129 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-05 01:00:43.143136 | orchestrator | Monday 05 January 2026 00:58:10 +0000 (0:00:00.495) 0:00:22.294 ******** 2026-01-05 01:00:43.143156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.143164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.143176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.143183 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.143198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.143213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.143220 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:00:43.143244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:00:43.143251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:00:43.143258 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143265 | orchestrator | 
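(Aside, not part of the job output: the container definitions dumped above each carry a `healthcheck` dict with plain-second string fields such as `interval: '30'`. The sketch below, under the assumption that these values are seconds, shows how such a dict could be translated into the nanosecond-based `HealthConfig` structure the Docker Engine API expects; the field names mirror the log, but the conversion function `to_docker_healthcheck` is illustrative, not kolla-ansible's actual code.)

```python
# Illustrative sketch: map a kolla-style healthcheck dict (as logged
# above, values are strings of seconds) onto Docker's HealthConfig,
# which takes durations as integer nanoseconds (1 s = 1e9 ns).
# 'to_docker_healthcheck' is a hypothetical helper, not kolla code.

keystone_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
    "timeout": "30",
}

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert plain-second strings into Docker API HealthConfig fields."""
    ns = 1_000_000_000  # nanoseconds per second
    return {
        "Test": hc["test"],                           # passed through as-is
        "Interval": int(hc["interval"]) * ns,
        "Timeout": int(hc["timeout"]) * ns,
        "StartPeriod": int(hc["start_period"]) * ns,
        "Retries": int(hc["retries"]),
    }

cfg = to_docker_healthcheck(keystone_healthcheck)
print(cfg["Interval"])  # 30 s expressed in nanoseconds
```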
2026-01-05 01:00:43.143271 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.143277 | orchestrator | Monday 05 January 2026 00:58:10 +0000 (0:00:00.659) 0:00:22.953 ******** 2026-01-05 01:00:43.143283 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143289 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143296 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143302 | orchestrator | 2026-01-05 01:00:43.143308 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-05 01:00:43.143314 | orchestrator | Monday 05 January 2026 00:58:11 +0000 (0:00:00.306) 0:00:23.260 ******** 2026-01-05 01:00:43.143321 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:43.143328 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:43.143334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:00:43.143341 | orchestrator | 2026-01-05 01:00:43.143348 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-05 01:00:43.143354 | orchestrator | Monday 05 January 2026 00:58:12 +0000 (0:00:01.698) 0:00:24.959 ******** 2026-01-05 01:00:43.143361 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:00:43.143368 | orchestrator | 2026-01-05 01:00:43.143374 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-05 01:00:43.143381 | orchestrator | Monday 05 January 2026 00:58:13 +0000 (0:00:00.963) 0:00:25.923 ******** 2026-01-05 01:00:43.143387 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143394 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143400 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 01:00:43.143406 | orchestrator | 2026-01-05 01:00:43.143412 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-05 01:00:43.143418 | orchestrator | Monday 05 January 2026 00:58:14 +0000 (0:00:00.795) 0:00:26.719 ******** 2026-01-05 01:00:43.143430 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 01:00:43.143441 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:00:43.143455 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 01:00:43.143462 | orchestrator | 2026-01-05 01:00:43.143468 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-05 01:00:43.143475 | orchestrator | Monday 05 January 2026 00:58:15 +0000 (0:00:01.325) 0:00:28.045 ******** 2026-01-05 01:00:43.143482 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.143489 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.143496 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.143502 | orchestrator | 2026-01-05 01:00:43.143509 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-05 01:00:43.143515 | orchestrator | Monday 05 January 2026 00:58:16 +0000 (0:00:00.337) 0:00:28.382 ******** 2026-01-05 01:00:43.143521 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:43.143527 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:43.143534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:00:43.143540 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:43.143547 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:43.143553 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:00:43.143560 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:43.143567 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:43.143573 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:00:43.143579 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:43.143586 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:43.143593 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:00:43.143601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:43.143607 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:43.143614 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:00:43.143621 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:00:43.143627 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:00:43.143634 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:00:43.143640 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:00:43.143647 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 
01:00:43.143653 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:00:43.143660 | orchestrator | 2026-01-05 01:00:43.143667 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-05 01:00:43.143674 | orchestrator | Monday 05 January 2026 00:58:25 +0000 (0:00:08.777) 0:00:37.160 ******** 2026-01-05 01:00:43.143681 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:43.143688 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:43.143694 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:00:43.143707 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:43.143713 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:43.143720 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:00:43.143727 | orchestrator | 2026-01-05 01:00:43.143733 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-05 01:00:43.143740 | orchestrator | Monday 05 January 2026 00:58:28 +0000 (0:00:03.120) 0:00:40.281 ******** 2026-01-05 01:00:43.143758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.143767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.143775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:00:43.143782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.143794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 
01:00:43.143809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:00:43.143817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.143824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.143832 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:00:43.143839 | orchestrator | 2026-01-05 01:00:43.143846 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.143853 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:00:02.236) 0:00:42.518 ******** 2026-01-05 01:00:43.143859 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.143865 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.143901 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.143913 | orchestrator | 2026-01-05 01:00:43.143920 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-05 01:00:43.143926 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:00:00.310) 0:00:42.828 ******** 2026-01-05 01:00:43.143933 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.143939 | orchestrator | 2026-01-05 01:00:43.143945 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-05 01:00:43.143952 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:02.170) 0:00:44.999 ******** 2026-01-05 01:00:43.143958 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.143964 | orchestrator | 2026-01-05 01:00:43.143970 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2026-01-05 01:00:43.143977 | orchestrator | Monday 05 January 2026 00:58:35 +0000 (0:00:02.374) 0:00:47.374 ******** 2026-01-05 01:00:43.143983 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.143990 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.143996 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.144003 | orchestrator | 2026-01-05 01:00:43.144009 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-05 01:00:43.144016 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:01.010) 0:00:48.384 ******** 2026-01-05 01:00:43.144022 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.144028 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.144034 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.144041 | orchestrator | 2026-01-05 01:00:43.144047 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-05 01:00:43.144053 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:00.346) 0:00:48.730 ******** 2026-01-05 01:00:43.144059 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144066 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.144073 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.144079 | orchestrator | 2026-01-05 01:00:43.144086 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-05 01:00:43.144092 | orchestrator | Monday 05 January 2026 00:58:37 +0000 (0:00:00.362) 0:00:49.093 ******** 2026-01-05 01:00:43.144098 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144104 | orchestrator | 2026-01-05 01:00:43.144110 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-05 01:00:43.144117 | orchestrator | Monday 05 January 2026 00:58:52 +0000 (0:00:15.890) 0:01:04.983 ******** 2026-01-05 01:00:43.144123 | 
orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144129 | orchestrator | 2026-01-05 01:00:43.144139 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 01:00:43.144149 | orchestrator | Monday 05 January 2026 00:59:03 +0000 (0:00:10.951) 0:01:15.934 ******** 2026-01-05 01:00:43.144156 | orchestrator | 2026-01-05 01:00:43.144162 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 01:00:43.144169 | orchestrator | Monday 05 January 2026 00:59:03 +0000 (0:00:00.069) 0:01:16.004 ******** 2026-01-05 01:00:43.144175 | orchestrator | 2026-01-05 01:00:43.144182 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 01:00:43.144188 | orchestrator | Monday 05 January 2026 00:59:04 +0000 (0:00:00.067) 0:01:16.072 ******** 2026-01-05 01:00:43.144194 | orchestrator | 2026-01-05 01:00:43.144200 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-05 01:00:43.144207 | orchestrator | Monday 05 January 2026 00:59:04 +0000 (0:00:00.130) 0:01:16.202 ******** 2026-01-05 01:00:43.144221 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144227 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:00:43.144233 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:00:43.144377 | orchestrator | 2026-01-05 01:00:43.144386 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-05 01:00:43.144393 | orchestrator | Monday 05 January 2026 00:59:27 +0000 (0:00:23.331) 0:01:39.533 ******** 2026-01-05 01:00:43.144400 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144412 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:00:43.144430 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:00:43.144438 | orchestrator | 2026-01-05 01:00:43.144445 | orchestrator | RUNNING HANDLER 
[keystone : Restart keystone container] ************************ 2026-01-05 01:00:43.144452 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:09.887) 0:01:49.421 ******** 2026-01-05 01:00:43.144458 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144465 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:00:43.144472 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:00:43.144479 | orchestrator | 2026-01-05 01:00:43.144486 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.144493 | orchestrator | Monday 05 January 2026 00:59:49 +0000 (0:00:12.345) 0:02:01.767 ******** 2026-01-05 01:00:43.144499 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:00:43.144506 | orchestrator | 2026-01-05 01:00:43.144514 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-05 01:00:43.144521 | orchestrator | Monday 05 January 2026 00:59:50 +0000 (0:00:00.779) 0:02:02.546 ******** 2026-01-05 01:00:43.144528 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:00:43.144535 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:00:43.144541 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.144548 | orchestrator | 2026-01-05 01:00:43.144555 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-05 01:00:43.144562 | orchestrator | Monday 05 January 2026 00:59:51 +0000 (0:00:00.777) 0:02:03.324 ******** 2026-01-05 01:00:43.144569 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:00:43.144575 | orchestrator | 2026-01-05 01:00:43.144583 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-05 01:00:43.144590 | orchestrator | Monday 05 January 2026 00:59:53 +0000 (0:00:01.786) 0:02:05.110 ******** 2026-01-05 01:00:43.144597 | orchestrator 
| changed: [testbed-node-0] => (item=RegionOne) 2026-01-05 01:00:43.144604 | orchestrator | 2026-01-05 01:00:43.144611 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-05 01:00:43.144618 | orchestrator | Monday 05 January 2026 01:00:04 +0000 (0:00:11.876) 0:02:16.987 ******** 2026-01-05 01:00:43.144625 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-05 01:00:43.144632 | orchestrator | 2026-01-05 01:00:43.144639 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-05 01:00:43.144645 | orchestrator | Monday 05 January 2026 01:00:28 +0000 (0:00:23.328) 0:02:40.315 ******** 2026-01-05 01:00:43.144652 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-05 01:00:43.144659 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-05 01:00:43.144666 | orchestrator | 2026-01-05 01:00:43.144673 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-05 01:00:43.144680 | orchestrator | Monday 05 January 2026 01:00:34 +0000 (0:00:06.520) 0:02:46.836 ******** 2026-01-05 01:00:43.144687 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144695 | orchestrator | 2026-01-05 01:00:43.144702 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-05 01:00:43.144708 | orchestrator | Monday 05 January 2026 01:00:34 +0000 (0:00:00.122) 0:02:46.959 ******** 2026-01-05 01:00:43.144715 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144721 | orchestrator | 2026-01-05 01:00:43.144728 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-05 01:00:43.144735 | orchestrator | Monday 05 January 2026 01:00:35 +0000 (0:00:00.132) 0:02:47.091 ******** 
2026-01-05 01:00:43.144741 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144748 | orchestrator | 2026-01-05 01:00:43.144755 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-05 01:00:43.144762 | orchestrator | Monday 05 January 2026 01:00:35 +0000 (0:00:00.142) 0:02:47.234 ******** 2026-01-05 01:00:43.144776 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144783 | orchestrator | 2026-01-05 01:00:43.144790 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-05 01:00:43.144797 | orchestrator | Monday 05 January 2026 01:00:35 +0000 (0:00:00.723) 0:02:47.958 ******** 2026-01-05 01:00:43.144804 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:00:43.144810 | orchestrator | 2026-01-05 01:00:43.144817 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:00:43.144824 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:03.251) 0:02:51.209 ******** 2026-01-05 01:00:43.144830 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:00:43.144843 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:00:43.144850 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:00:43.144858 | orchestrator | 2026-01-05 01:00:43.144884 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:00:43.144893 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 01:00:43.144901 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:00:43.144908 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:00:43.144914 | orchestrator | 2026-01-05 01:00:43.144921 | orchestrator | 2026-01-05 01:00:43.144928 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-05 01:00:43.144935 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:00.585) 0:02:51.795 ******** 2026-01-05 01:00:43.144941 | orchestrator | =============================================================================== 2026-01-05 01:00:43.144948 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.33s 2026-01-05 01:00:43.144955 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.33s 2026-01-05 01:00:43.144961 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.89s 2026-01-05 01:00:43.144968 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.35s 2026-01-05 01:00:43.144974 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.88s 2026-01-05 01:00:43.144981 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.95s 2026-01-05 01:00:43.144987 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.89s 2026-01-05 01:00:43.144994 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.78s 2026-01-05 01:00:43.145001 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.52s 2026-01-05 01:00:43.145007 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.71s 2026-01-05 01:00:43.145014 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.43s 2026-01-05 01:00:43.145021 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s 2026-01-05 01:00:43.145028 | orchestrator | keystone : Creating default user role ----------------------------------- 3.25s 2026-01-05 01:00:43.145036 | orchestrator | keystone : Copying files for 
keystone-ssh ------------------------------- 3.12s 2026-01-05 01:00:43.145043 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.38s 2026-01-05 01:00:43.145050 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s 2026-01-05 01:00:43.145057 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.17s 2026-01-05 01:00:43.145064 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.87s 2026-01-05 01:00:43.145071 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2026-01-05 01:00:43.145078 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.70s 2026-01-05 01:00:43.145093 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:00:43.151899 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 2cbd2bf6-929f-4fdf-a4e5-4e74fd7819a8 is in state STARTED 2026-01-05 01:00:43.153398 | orchestrator | 2026-01-05 01:00:43 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 2026-01-05 01:00:43.154067 | orchestrator | 2026-01-05 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:46.188763 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state STARTED 2026-01-05 01:00:46.189069 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:00:46.192976 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:00:46.193764 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task 2cbd2bf6-929f-4fdf-a4e5-4e74fd7819a8 is in state STARTED 2026-01-05 01:00:46.194787 | orchestrator | 2026-01-05 01:00:46 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 
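The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a simple poll-and-sleep loop: each round the client queries the state of every outstanding task and sleeps before the next check, until everything reaches SUCCESS. A minimal sketch of that pattern, assuming a hypothetical `get_task_state(task_id)` lookup in place of the real OSISM/Celery result API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll task states until all reach SUCCESS, logging each round.

    get_task_state(task_id) -> str is a hypothetical helper standing in
    for the real task-result lookup (e.g. a Celery AsyncResult query).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                # Finished tasks drop out; the rest are re-polled next round.
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"task {task_id} failed")
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note the log shows new task IDs appearing mid-run (e.g. 1dc69824… at 01:01:28), so the real loop re-reads its task list each round rather than fixing it up front as this sketch does.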
2026-01-05 01:00:46.194825 | orchestrator | 2026-01-05 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:25.842432 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task 2cbd2bf6-929f-4fdf-a4e5-4e74fd7819a8 is in state SUCCESS 2026-01-05 01:01:28.893442 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:01:47.124717 | orchestrator | 2026-01-05 01:01:47 | INFO  | Wait 1
second(s) until the next check 2026-01-05 01:01:50.146428 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state STARTED 2026-01-05 01:01:50.146646 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:01:50.147389 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:01:50.148117 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:01:50.148861 | orchestrator | 2026-01-05 01:01:50 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 2026-01-05 01:01:50.148896 | orchestrator | 2026-01-05 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:53.180533 | orchestrator | 2026-01-05 01:01:53.180664 | orchestrator | 2026-01-05 01:01:53.180682 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:53.180719 | orchestrator | 2026-01-05 01:01:53.180747 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:53.180839 | orchestrator | Monday 05 January 2026 01:00:46 +0000 (0:00:00.349) 0:00:00.349 ******** 2026-01-05 01:01:53.180860 | orchestrator | ok: [testbed-manager] 2026-01-05 01:01:53.180879 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:53.180890 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:53.180899 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:53.180909 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:53.180948 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:53.180958 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:53.180968 | orchestrator | 2026-01-05 01:01:53.180978 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:53.180988 | orchestrator | Monday 
05 January 2026 01:00:47 +0000 (0:00:01.029) 0:00:01.379 ******** 2026-01-05 01:01:53.180998 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181008 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181018 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181028 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181037 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181047 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181056 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-05 01:01:53.181068 | orchestrator | 2026-01-05 01:01:53.181080 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-05 01:01:53.181091 | orchestrator | 2026-01-05 01:01:53.181102 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-05 01:01:53.181113 | orchestrator | Monday 05 January 2026 01:00:49 +0000 (0:00:01.672) 0:00:03.051 ******** 2026-01-05 01:01:53.181126 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:53.181139 | orchestrator | 2026-01-05 01:01:53.181151 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-01-05 01:01:53.181160 | orchestrator | Monday 05 January 2026 01:00:51 +0000 (0:00:01.972) 0:00:05.024 ******** 2026-01-05 01:01:53.181170 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-01-05 01:01:53.181179 | orchestrator | 2026-01-05 01:01:53.181189 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-01-05 01:01:53.181198 | 
orchestrator | Monday 05 January 2026 01:00:54 +0000 (0:00:03.474) 0:00:08.498 ******** 2026-01-05 01:01:53.181263 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-01-05 01:01:53.181276 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-01-05 01:01:53.181343 | orchestrator | 2026-01-05 01:01:53.181355 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-01-05 01:01:53.181364 | orchestrator | Monday 05 January 2026 01:01:02 +0000 (0:00:07.690) 0:00:16.188 ******** 2026-01-05 01:01:53.181388 | orchestrator | ok: [testbed-manager] => (item=service) 2026-01-05 01:01:53.181399 | orchestrator | 2026-01-05 01:01:53.181409 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-01-05 01:01:53.181419 | orchestrator | Monday 05 January 2026 01:01:05 +0000 (0:00:03.279) 0:00:19.468 ******** 2026-01-05 01:01:53.181429 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:01:53.181438 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-01-05 01:01:53.181448 | orchestrator | 2026-01-05 01:01:53.181458 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-01-05 01:01:53.181468 | orchestrator | Monday 05 January 2026 01:01:10 +0000 (0:00:04.983) 0:00:24.451 ******** 2026-01-05 01:01:53.181477 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-01-05 01:01:53.181487 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-01-05 01:01:53.181497 | orchestrator | 2026-01-05 01:01:53.181506 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-01-05 01:01:53.181516 | orchestrator | Monday 05 January 2026 01:01:18 +0000 
(0:00:07.779) 0:00:32.230 ******** 2026-01-05 01:01:53.181525 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-01-05 01:01:53.181544 | orchestrator | 2026-01-05 01:01:53.181555 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:53.181564 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181575 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181585 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181595 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181605 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181636 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181647 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.181657 | orchestrator | 2026-01-05 01:01:53.181667 | orchestrator | 2026-01-05 01:01:53.181677 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:53.181687 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:06.347) 0:00:38.578 ******** 2026-01-05 01:01:53.181696 | orchestrator | =============================================================================== 2026-01-05 01:01:53.181706 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.78s 2026-01-05 01:01:53.181716 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.69s 2026-01-05 01:01:53.181727 | orchestrator | service-ks-register : 
ceph-rgw | Granting user roles -------------------- 6.35s 2026-01-05 01:01:53.181744 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.98s 2026-01-05 01:01:53.181784 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.47s 2026-01-05 01:01:53.181801 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.28s 2026-01-05 01:01:53.181818 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.97s 2026-01-05 01:01:53.181833 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.67s 2026-01-05 01:01:53.181849 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.03s 2026-01-05 01:01:53.181865 | orchestrator | 2026-01-05 01:01:53.181879 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 01:01:53.181895 | orchestrator | 2.16.14 2026-01-05 01:01:53.181911 | orchestrator | 2026-01-05 01:01:53.181953 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-01-05 01:01:53.181969 | orchestrator | 2026-01-05 01:01:53.181985 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-05 01:01:53.182000 | orchestrator | Monday 05 January 2026 01:00:37 +0000 (0:00:00.290) 0:00:00.290 ******** 2026-01-05 01:01:53.182084 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182104 | orchestrator | 2026-01-05 01:01:53.182121 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-05 01:01:53.182138 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:02.105) 0:00:02.396 ******** 2026-01-05 01:01:53.182155 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182172 | orchestrator | 2026-01-05 01:01:53.182190 | orchestrator | TASK [Set
mgr/dashboard/server_port to 7000] *********************************** 2026-01-05 01:01:53.182207 | orchestrator | Monday 05 January 2026 01:00:40 +0000 (0:00:01.207) 0:00:03.603 ******** 2026-01-05 01:01:53.182222 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182258 | orchestrator | 2026-01-05 01:01:53.182292 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-05 01:01:53.182326 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:01.642) 0:00:05.246 ******** 2026-01-05 01:01:53.182343 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182361 | orchestrator | 2026-01-05 01:01:53.182378 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-05 01:01:53.182396 | orchestrator | Monday 05 January 2026 01:00:43 +0000 (0:00:01.257) 0:00:06.504 ******** 2026-01-05 01:01:53.182413 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182431 | orchestrator | 2026-01-05 01:01:53.182459 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-05 01:01:53.182477 | orchestrator | Monday 05 January 2026 01:00:44 +0000 (0:00:01.422) 0:00:07.926 ******** 2026-01-05 01:01:53.182493 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182509 | orchestrator | 2026-01-05 01:01:53.182527 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-05 01:01:53.182545 | orchestrator | Monday 05 January 2026 01:00:46 +0000 (0:00:01.186) 0:00:09.112 ******** 2026-01-05 01:01:53.182562 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182580 | orchestrator | 2026-01-05 01:01:53.182598 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-05 01:01:53.182616 | orchestrator | Monday 05 January 2026 01:00:48 +0000 (0:00:02.095) 0:00:11.208 ******** 2026-01-05 01:01:53.182634 
| orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182650 | orchestrator | 2026-01-05 01:01:53.182666 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-05 01:01:53.182682 | orchestrator | Monday 05 January 2026 01:00:49 +0000 (0:00:01.266) 0:00:12.474 ******** 2026-01-05 01:01:53.182699 | orchestrator | changed: [testbed-manager] 2026-01-05 01:01:53.182717 | orchestrator | 2026-01-05 01:01:53.182735 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-05 01:01:53.182751 | orchestrator | Monday 05 January 2026 01:01:27 +0000 (0:00:38.421) 0:00:50.896 ******** 2026-01-05 01:01:53.182810 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:01:53.182827 | orchestrator | 2026-01-05 01:01:53.182843 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:01:53.182858 | orchestrator | 2026-01-05 01:01:53.182873 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:01:53.182888 | orchestrator | Monday 05 January 2026 01:01:28 +0000 (0:00:00.140) 0:00:51.037 ******** 2026-01-05 01:01:53.182904 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:53.182919 | orchestrator | 2026-01-05 01:01:53.182935 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:01:53.182951 | orchestrator | 2026-01-05 01:01:53.182967 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:01:53.183000 | orchestrator | Monday 05 January 2026 01:01:39 +0000 (0:00:11.786) 0:01:02.824 ******** 2026-01-05 01:01:53.183030 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:53.183046 | orchestrator | 2026-01-05 01:01:53.183062 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 
01:01:53.183077 | orchestrator | 2026-01-05 01:01:53.183093 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:01:53.183138 | orchestrator | Monday 05 January 2026 01:01:51 +0000 (0:00:11.242) 0:01:14.066 ******** 2026-01-05 01:01:53.183155 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:53.183170 | orchestrator | 2026-01-05 01:01:53.183186 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:53.183203 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:01:53.183220 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.183237 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.183271 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:53.183288 | orchestrator | 2026-01-05 01:01:53.183304 | orchestrator | 2026-01-05 01:01:53.183320 | orchestrator | 2026-01-05 01:01:53.183335 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:53.183352 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:00.980) 0:01:15.047 ******** 2026-01-05 01:01:53.183368 | orchestrator | =============================================================================== 2026-01-05 01:01:53.183385 | orchestrator | Create admin user ------------------------------------------------------ 38.42s 2026-01-05 01:01:53.183401 | orchestrator | Restart ceph manager service ------------------------------------------- 24.01s 2026-01-05 01:01:53.183416 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.11s 2026-01-05 01:01:53.183431 | orchestrator | Enable the ceph dashboard 
----------------------------------------------- 2.10s 2026-01-05 01:01:53.183446 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.64s 2026-01-05 01:01:53.183462 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.42s 2026-01-05 01:01:53.183477 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.27s 2026-01-05 01:01:53.183493 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2026-01-05 01:01:53.183510 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.21s 2026-01-05 01:01:53.183526 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.19s 2026-01-05 01:01:53.183543 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2026-01-05 01:01:53.183560 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task cd40aaa0-a341-4bd7-aee7-3393e9600553 is in state SUCCESS 2026-01-05 01:01:53.183578 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:01:53.183596 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:01:53.183623 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:01:53.183641 | orchestrator | 2026-01-05 01:01:53 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 2026-01-05 01:01:53.183655 | orchestrator | 2026-01-05 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:56.210903 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:01:56.211178 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 
01:01:56.211896 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:01:56.212337 | orchestrator | 2026-01-05 01:01:56 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 2026-01-05 01:01:56.212436 | orchestrator | 2026-01-05 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:46.015101 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:03:46.017229 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:03:46.019382 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task
1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:03:46.020720 | orchestrator | 2026-01-05 01:03:46 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state STARTED 2026-01-05 01:03:46.020758 | orchestrator | 2026-01-05 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:49.067713 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:03:49.068881 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state STARTED 2026-01-05 01:03:49.072503 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:03:49.074187 | orchestrator | 2026-01-05 01:03:49 | INFO  | Task 16a826bd-c15c-483b-a19f-9560b4c71383 is in state SUCCESS 2026-01-05 01:03:49.074219 | orchestrator | 2026-01-05 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:49.076214 | orchestrator | 2026-01-05 01:03:49.076278 | orchestrator | 2026-01-05 01:03:49.076286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:03:49.076292 | orchestrator | 2026-01-05 01:03:49.076297 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:03:49.076303 | orchestrator | Monday 05 January 2026 01:00:48 +0000 (0:00:00.765) 0:00:00.765 ******** 2026-01-05 01:03:49.076308 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:49.076315 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:49.076319 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:49.076324 | orchestrator | 2026-01-05 01:03:49.076329 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:03:49.076334 | orchestrator | Monday 05 January 2026 01:00:49 +0000 (0:00:00.769) 0:00:01.535 ******** 2026-01-05 01:03:49.076339 | orchestrator | ok: 
[testbed-node-0] => (item=enable_cinder_True) 2026-01-05 01:03:49.076344 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-05 01:03:49.076348 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-05 01:03:49.076353 | orchestrator | 2026-01-05 01:03:49.076358 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-05 01:03:49.076362 | orchestrator | 2026-01-05 01:03:49.076368 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-05 01:03:49.076373 | orchestrator | Monday 05 January 2026 01:00:50 +0000 (0:00:00.955) 0:00:02.491 ******** 2026-01-05 01:03:49.076378 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:49.076384 | orchestrator | 2026-01-05 01:03:49.076389 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-05 01:03:49.076393 | orchestrator | Monday 05 January 2026 01:00:50 +0000 (0:00:00.610) 0:00:03.102 ******** 2026-01-05 01:03:49.076399 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-05 01:03:49.076403 | orchestrator | 2026-01-05 01:03:49.076489 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-05 01:03:49.076495 | orchestrator | Monday 05 January 2026 01:00:54 +0000 (0:00:03.158) 0:00:06.261 ******** 2026-01-05 01:03:49.076500 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-05 01:03:49.076506 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-05 01:03:49.076511 | orchestrator | 2026-01-05 01:03:49.076619 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-05 
01:03:49.076624 | orchestrator | Monday 05 January 2026 01:00:59 +0000 (0:00:05.775) 0:00:12.036 ******** 2026-01-05 01:03:49.076629 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 01:03:49.076634 | orchestrator | 2026-01-05 01:03:49.076639 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-05 01:03:49.076644 | orchestrator | Monday 05 January 2026 01:01:02 +0000 (0:00:02.865) 0:00:14.902 ******** 2026-01-05 01:03:49.076649 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:03:49.076654 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-05 01:03:49.076659 | orchestrator | 2026-01-05 01:03:49.076664 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-05 01:03:49.076668 | orchestrator | Monday 05 January 2026 01:01:06 +0000 (0:00:03.989) 0:00:18.891 ******** 2026-01-05 01:03:49.076673 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:03:49.076678 | orchestrator | 2026-01-05 01:03:49.076692 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-05 01:03:49.076697 | orchestrator | Monday 05 January 2026 01:01:10 +0000 (0:00:03.991) 0:00:22.882 ******** 2026-01-05 01:03:49.076702 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-05 01:03:49.076707 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-05 01:03:49.076712 | orchestrator | 2026-01-05 01:03:49.076716 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-05 01:03:49.076721 | orchestrator | Monday 05 January 2026 01:01:18 +0000 (0:00:07.412) 0:00:30.295 ******** 2026-01-05 01:03:49.076730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.076754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.076764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.076779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076801 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.076873 | orchestrator | 2026-01-05 01:03:49.076879 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-05 01:03:49.076885 | orchestrator | Monday 05 January 2026 01:01:21 +0000 (0:00:03.099) 0:00:33.395 ******** 2026-01-05 01:03:49.076891 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:49.076898 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:49.076905 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:49.076912 | orchestrator | 2026-01-05 01:03:49.076921 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-05 01:03:49.076930 | orchestrator | Monday 05 January 2026 01:01:21 +0000 (0:00:00.482) 0:00:33.877 ******** 2026-01-05 01:03:49.076940 | orchestrator | 
included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:49.076980 | orchestrator | 2026-01-05 01:03:49.076998 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-05 01:03:49.077008 | orchestrator | Monday 05 January 2026 01:01:22 +0000 (0:00:01.168) 0:00:35.045 ******** 2026-01-05 01:03:49.077018 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-05 01:03:49.077027 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-05 01:03:49.077035 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-05 01:03:49.077050 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-05 01:03:49.077057 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-05 01:03:49.077062 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-05 01:03:49.077066 | orchestrator | 2026-01-05 01:03:49.077072 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-05 01:03:49.077076 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:02.202) 0:00:37.248 ******** 2026-01-05 01:03:49.077082 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077089 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077101 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077109 | orchestrator | skipping: 
[testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077122 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077135 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:03:49.077144 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:03:49.077157 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
2026-01-05 01:03:49.077167 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:03:49.077178 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:03:49.077189 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:03:49.077194 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:03:49.077199 | orchestrator | 2026-01-05 01:03:49.077204 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-05 01:03:49.077209 | orchestrator | Monday 05 January 2026 01:01:28 +0000 (0:00:03.336) 0:00:40.585 ******** 2026-01-05 01:03:49.077214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:49.077219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:49.077227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:49.077233 | orchestrator | 2026-01-05 01:03:49.077238 | orchestrator | 
TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-05 01:03:49.077243 | orchestrator | Monday 05 January 2026 01:01:30 +0000 (0:00:01.949) 0:00:42.535 ******** 2026-01-05 01:03:49.077248 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-05 01:03:49.077253 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-05 01:03:49.077259 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-05 01:03:49.077264 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:03:49.077269 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:03:49.077274 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:03:49.077279 | orchestrator | 2026-01-05 01:03:49.077284 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-05 01:03:49.077289 | orchestrator | Monday 05 January 2026 01:01:33 +0000 (0:00:03.252) 0:00:45.788 ******** 2026-01-05 01:03:49.077293 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-05 01:03:49.077302 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-05 01:03:49.077307 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-05 01:03:49.077312 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-05 01:03:49.077317 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-05 01:03:49.077322 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-05 01:03:49.077327 | orchestrator | 2026-01-05 01:03:49.077333 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-05 01:03:49.077341 | orchestrator | Monday 05 January 2026 01:01:34 +0000 (0:00:01.358) 0:00:47.146 ******** 2026-01-05 01:03:49.077350 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:49.077358 | orchestrator | 2026-01-05 01:03:49.077367 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-05 01:03:49.077375 | orchestrator | Monday 05 January 2026 01:01:35 +0000 (0:00:00.156) 0:00:47.303 ******** 2026-01-05 01:03:49.077383 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:49.077393 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:49.077404 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:49.077413 | orchestrator | 2026-01-05 01:03:49.077420 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-05 01:03:49.077424 | orchestrator | Monday 05 January 2026 01:01:35 +0000 (0:00:00.408) 0:00:47.711 ******** 2026-01-05 01:03:49.077429 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:49.077434 | orchestrator | 2026-01-05 01:03:49.077438 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-05 01:03:49.077443 | orchestrator | Monday 05 January 2026 01:01:36 +0000 (0:00:00.788) 0:00:48.499 ******** 2026-01-05 01:03:49.077449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.077454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.077463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.077477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-01-05 01:03:49.077512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.077818 | orchestrator | 2026-01-05 01:03:49.077823 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-05 01:03:49.077828 | orchestrator | Monday 05 January 2026 01:01:41 +0000 (0:00:04.901) 0:00:53.401 ******** 2026-01-05 01:03:49.077833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.077843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077868 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 01:03:49.077873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.077878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077899 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:49.077904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.077913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077944 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:49.077953 | orchestrator | 2026-01-05 01:03:49.077961 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-05 01:03:49.077971 | orchestrator | Monday 05 January 2026 01:01:43 +0000 (0:00:01.798) 0:00:55.199 ******** 2026-01-05 01:03:49.077981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.077986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.077999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078077 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:49.078087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.078100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078120 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:49.078126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.078132 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078153 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:49.078158 | orchestrator | 2026-01-05 01:03:49.078163 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-05 01:03:49.078168 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:03.059) 0:00:58.258 ******** 2026-01-05 01:03:49.078172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078203 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 
01:03:49.078224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078258 | orchestrator | 2026-01-05 01:03:49.078263 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-05 01:03:49.078268 | orchestrator | Monday 05 January 2026 
01:01:50 +0000 (0:00:04.618) 0:01:02.877 ******** 2026-01-05 01:03:49.078273 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-05 01:03:49.078280 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-05 01:03:49.078285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-05 01:03:49.078305 | orchestrator | 2026-01-05 01:03:49.078310 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-05 01:03:49.078315 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:01.935) 0:01:04.812 ******** 2026-01-05 01:03:49.078320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2026-01-05 01:03:49.078420 | orchestrator |
2026-01-05 01:03:49.078425 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-01-05 01:03:49.078430 | orchestrator | Monday 05 January 2026 01:02:05 +0000 (0:00:13.169) 0:01:17.981 ********
2026-01-05 01:03:49.078434 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.078439 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:03:49.078444 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:03:49.078448 | orchestrator |
2026-01-05 01:03:49.078453 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-01-05 01:03:49.078458 | orchestrator | Monday 05 January 2026 01:02:07 +0000 (0:00:01.878) 0:01:19.860 ********
2026-01-05 01:03:49.078463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:03:49.078471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078495 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 01:03:49.078500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.078505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078524 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:49.078531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:03:49.078541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:03:49.078551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:03:49.078556 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:03:49.078561 | orchestrator |
2026-01-05 01:03:49.078566 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-01-05 01:03:49.078571 | orchestrator | Monday 05 January 2026 01:02:08 +0000 (0:00:01.055) 0:01:20.915 ********
2026-01-05 01:03:49.078576 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:03:49.078629 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:03:49.078636 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:03:49.078640 | orchestrator |
2026-01-05 01:03:49.078645 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-01-05 01:03:49.078651 | orchestrator | Monday 05 January 2026 01:02:09 +0000 (0:00:00.300) 0:01:21.216 ********
2026-01-05 01:03:49.078656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:03:49.078704 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:03:49.078793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:03:49.078806 | orchestrator |
2026-01-05 01:03:49.078814 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-05 01:03:49.078821 | orchestrator | Monday 05 January 2026 01:02:12 +0000 (0:00:03.110) 0:01:24.326 ********
2026-01-05 01:03:49.078829 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:03:49.078837 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:03:49.078845 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:03:49.078853 | orchestrator |
2026-01-05 01:03:49.078862 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-01-05 01:03:49.078870 | orchestrator | Monday 05 January 2026 01:02:12 +0000 (0:00:00.383) 0:01:24.709 ********
2026-01-05 01:03:49.078878 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.078886 | orchestrator |
2026-01-05 01:03:49.078894 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-05 01:03:49.078902 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:01.934) 0:01:26.643 ********
2026-01-05 01:03:49.078910 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.078918 | orchestrator |
2026-01-05 01:03:49.078926 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-05 01:03:49.078941 | orchestrator | Monday 05 January 2026 01:02:16 +0000 (0:00:02.030) 0:01:28.674 ********
2026-01-05 01:03:49.078950 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.078958 | orchestrator |
2026-01-05 01:03:49.078965 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:03:49.078972 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:19.317) 0:01:47.992 ********
2026-01-05 01:03:49.078977 | orchestrator |
2026-01-05 01:03:49.078982 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:03:49.078989 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:00.067) 0:01:48.059 ********
2026-01-05 01:03:49.078997 | orchestrator |
2026-01-05 01:03:49.079004 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:03:49.079011 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:00.067) 0:01:48.126 ********
2026-01-05 01:03:49.079019 | orchestrator |
2026-01-05 01:03:49.079027 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-05 01:03:49.079036 | orchestrator | Monday 05 January 2026 01:02:36 +0000 (0:00:00.069) 0:01:48.196 ********
2026-01-05 01:03:49.079044 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.079052 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:03:49.079060 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:03:49.079068 | orchestrator |
2026-01-05 01:03:49.079076 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-05 01:03:49.079083 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:30.562) 0:02:18.759 ********
2026-01-05 01:03:49.079092 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:03:49.079099 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:03:49.079107 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.079112 | orchestrator |
2026-01-05 01:03:49.079117 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-05 01:03:49.079122 | orchestrator | Monday 05 January 2026 01:03:14 +0000 (0:00:08.217) 0:02:26.977 ********
2026-01-05 01:03:49.079126 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.079131 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:03:49.079135 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:03:49.079140 | orchestrator |
2026-01-05 01:03:49.079145 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-05 01:03:49.079149 | orchestrator | Monday 05 January 2026 01:03:38 +0000 (0:00:23.422) 0:02:50.399 ********
2026-01-05 01:03:49.079154 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:03:49.079165 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:03:49.079170 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:03:49.079175 | orchestrator |
2026-01-05 01:03:49.079179 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-05 01:03:49.079185 | orchestrator | Monday 05 January 2026 01:03:48 +0000 (0:00:10.040) 0:03:00.439 ********
2026-01-05 01:03:49.079189 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:03:49.079194 | orchestrator |
2026-01-05 01:03:49.079199 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:03:49.079205 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 01:03:49.079212 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:03:49.079221 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:03:49.079226 | orchestrator |
2026-01-05 01:03:49.079231 | orchestrator |
2026-01-05 01:03:49.079236 | orchestrator | TASKS RECAP
******************************************************************** 2026-01-05 01:03:49.079241 | orchestrator | Monday 05 January 2026 01:03:48 +0000 (0:00:00.266) 0:03:00.706 ******** 2026-01-05 01:03:49.079246 | orchestrator | =============================================================================== 2026-01-05 01:03:49.079250 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.56s 2026-01-05 01:03:49.079255 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.42s 2026-01-05 01:03:49.079260 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.32s 2026-01-05 01:03:49.079265 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.17s 2026-01-05 01:03:49.079269 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.04s 2026-01-05 01:03:49.079274 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.22s 2026-01-05 01:03:49.079278 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.41s 2026-01-05 01:03:49.079284 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.78s 2026-01-05 01:03:49.079288 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.90s 2026-01-05 01:03:49.079293 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.62s 2026-01-05 01:03:49.079298 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.99s 2026-01-05 01:03:49.079303 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.99s 2026-01-05 01:03:49.079308 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.34s 2026-01-05 01:03:49.079313 | orchestrator | cinder : Copy over Ceph keyring 
files for cinder-backup ----------------- 3.25s 2026-01-05 01:03:49.079317 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.16s 2026-01-05 01:03:49.079322 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.11s 2026-01-05 01:03:49.079327 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.10s 2026-01-05 01:03:49.079341 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.06s 2026-01-05 01:03:49.079354 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.87s 2026-01-05 01:03:49.079364 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.20s 2026-01-05 01:03:52.110273 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:03:52.110477 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:03:52.111985 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 3e476df6-fe15-4a3d-86c7-56a3b53d6a93 is in state SUCCESS 2026-01-05 01:03:52.112040 | orchestrator | 2026-01-05 01:03:52.113596 | orchestrator | 2026-01-05 01:03:52.113663 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:03:52.113676 | orchestrator | 2026-01-05 01:03:52.113685 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:03:52.113694 | orchestrator | Monday 05 January 2026 01:00:46 +0000 (0:00:00.380) 0:00:00.380 ******** 2026-01-05 01:03:52.113703 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:52.113713 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:52.113722 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:52.113730 | orchestrator | 2026-01-05 01:03:52.113738 | orchestrator | TASK [Group hosts 
based on enabled services] *********************************** 2026-01-05 01:03:52.113747 | orchestrator | Monday 05 January 2026 01:00:47 +0000 (0:00:00.366) 0:00:00.747 ******** 2026-01-05 01:03:52.113754 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-05 01:03:52.113763 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-05 01:03:52.113773 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-05 01:03:52.113782 | orchestrator | 2026-01-05 01:03:52.113790 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-05 01:03:52.113799 | orchestrator | 2026-01-05 01:03:52.113808 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-05 01:03:52.113818 | orchestrator | Monday 05 January 2026 01:00:47 +0000 (0:00:00.513) 0:00:01.261 ******** 2026-01-05 01:03:52.113827 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:52.113836 | orchestrator | 2026-01-05 01:03:52.113845 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-05 01:03:52.113853 | orchestrator | Monday 05 January 2026 01:00:48 +0000 (0:00:01.169) 0:00:02.431 ******** 2026-01-05 01:03:52.113861 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-05 01:03:52.113869 | orchestrator | 2026-01-05 01:03:52.113878 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-01-05 01:03:52.113887 | orchestrator | Monday 05 January 2026 01:00:52 +0000 (0:00:04.253) 0:00:06.685 ******** 2026-01-05 01:03:52.113896 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-05 01:03:52.113905 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> 
public) 2026-01-05 01:03:52.113914 | orchestrator | 2026-01-05 01:03:52.113922 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-05 01:03:52.113931 | orchestrator | Monday 05 January 2026 01:00:58 +0000 (0:00:05.760) 0:00:12.445 ******** 2026-01-05 01:03:52.113955 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-05 01:03:52.113965 | orchestrator | 2026-01-05 01:03:52.113974 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-01-05 01:03:52.113982 | orchestrator | Monday 05 January 2026 01:01:01 +0000 (0:00:02.969) 0:00:15.415 ******** 2026-01-05 01:03:52.113991 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:03:52.114000 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-05 01:03:52.114009 | orchestrator | 2026-01-05 01:03:52.114058 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-05 01:03:52.114068 | orchestrator | Monday 05 January 2026 01:01:05 +0000 (0:00:04.025) 0:00:19.440 ******** 2026-01-05 01:03:52.114076 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:03:52.114085 | orchestrator | 2026-01-05 01:03:52.114093 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-05 01:03:52.114101 | orchestrator | Monday 05 January 2026 01:01:09 +0000 (0:00:03.963) 0:00:23.404 ******** 2026-01-05 01:03:52.114108 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-05 01:03:52.114116 | orchestrator | 2026-01-05 01:03:52.114125 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-05 01:03:52.114170 | orchestrator | Monday 05 January 2026 01:01:13 +0000 (0:00:04.087) 0:00:27.492 ******** 2026-01-05 01:03:52.114209 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114265 | orchestrator | 2026-01-05 01:03:52.114274 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-05 01:03:52.114283 | orchestrator | Monday 05 January 2026 01:01:20 +0000 (0:00:06.729) 0:00:34.221 ******** 2026-01-05 01:03:52.114293 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:52.114301 | orchestrator | 2026-01-05 01:03:52.114317 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-05 01:03:52.114327 | orchestrator | Monday 05 January 2026 01:01:21 +0000 (0:00:01.063) 0:00:35.285 ******** 2026-01-05 01:03:52.114335 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.114344 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 01:03:52.114352 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:52.114361 | orchestrator | 2026-01-05 01:03:52.114369 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-05 01:03:52.114378 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:04.501) 0:00:39.787 ******** 2026-01-05 01:03:52.114387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114415 | orchestrator | 2026-01-05 01:03:52.114423 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-05 01:03:52.114431 | orchestrator | Monday 05 January 2026 01:01:27 +0000 (0:00:01.601) 0:00:41.389 ******** 2026-01-05 01:03:52.114440 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:03:52.114466 | orchestrator | 2026-01-05 01:03:52.114475 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-05 01:03:52.114483 | orchestrator | Monday 05 January 2026 01:01:28 +0000 (0:00:01.201) 0:00:42.590 ******** 2026-01-05 01:03:52.114492 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:52.114501 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:52.114510 | orchestrator 
| ok: [testbed-node-2] 2026-01-05 01:03:52.114519 | orchestrator | 2026-01-05 01:03:52.114528 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-05 01:03:52.114537 | orchestrator | Monday 05 January 2026 01:01:29 +0000 (0:00:00.722) 0:00:43.312 ******** 2026-01-05 01:03:52.114553 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.114563 | orchestrator | 2026-01-05 01:03:52.114590 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-05 01:03:52.114606 | orchestrator | Monday 05 January 2026 01:01:29 +0000 (0:00:00.301) 0:00:43.614 ******** 2026-01-05 01:03:52.114615 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.114623 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.114632 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.114641 | orchestrator | 2026-01-05 01:03:52.114650 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-05 01:03:52.114658 | orchestrator | Monday 05 January 2026 01:01:30 +0000 (0:00:00.282) 0:00:43.896 ******** 2026-01-05 01:03:52.114667 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:52.114675 | orchestrator | 2026-01-05 01:03:52.114685 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-05 01:03:52.114690 | orchestrator | Monday 05 January 2026 01:01:31 +0000 (0:00:00.828) 0:00:44.725 ******** 2026-01-05 01:03:52.114703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.114744 | orchestrator | 2026-01-05 01:03:52.114752 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-05 01:03:52.114761 | orchestrator | Monday 05 January 2026 01:01:36 +0000 (0:00:05.429) 0:00:50.154 ******** 2026-01-05 01:03:52.114777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.114818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114827 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.114842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114852 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.114860 | orchestrator | 2026-01-05 01:03:52.114868 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-05 01:03:52.114876 | orchestrator | Monday 05 January 2026 01:01:40 +0000 (0:00:04.077) 0:00:54.231 ******** 2026-01-05 01:03:52.114895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114904 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.114918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114928 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.114937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:03:52.114956 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.114966 | orchestrator | 2026-01-05 01:03:52.114974 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-05 01:03:52.114981 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:06.261) 0:01:00.493 ******** 2026-01-05 01:03:52.114990 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.114999 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115007 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115015 | orchestrator | 2026-01-05 01:03:52.115024 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-05 01:03:52.115033 | orchestrator | Monday 05 January 2026 01:01:50 +0000 (0:00:04.102) 0:01:04.596 ******** 2026-01-05 01:03:52.115044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115091 | orchestrator | 2026-01-05 01:03:52.115099 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-05 01:03:52.115108 | orchestrator | Monday 05 January 2026 01:01:55 +0000 (0:00:04.589) 0:01:09.185 ******** 2026-01-05 01:03:52.115116 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115124 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:03:52.115132 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:52.115140 | orchestrator | 2026-01-05 01:03:52.115149 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-05 01:03:52.115157 | orchestrator | Monday 05 January 2026 01:02:02 +0000 (0:00:07.283) 0:01:16.469 ******** 2026-01-05 01:03:52.115165 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115173 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115179 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115184 | orchestrator | 2026-01-05 01:03:52.115189 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-05 01:03:52.115198 | orchestrator | Monday 05 January 2026 01:02:07 +0000 (0:00:04.969) 0:01:21.439 ******** 2026-01-05 01:03:52.115203 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115213 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115218 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115223 | orchestrator | 2026-01-05 01:03:52.115228 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-05 01:03:52.115233 | orchestrator | Monday 05 January 2026 01:02:11 +0000 (0:00:03.891) 0:01:25.331 
******** 2026-01-05 01:03:52.115239 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115244 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115249 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115254 | orchestrator | 2026-01-05 01:03:52.115259 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-05 01:03:52.115264 | orchestrator | Monday 05 January 2026 01:02:15 +0000 (0:00:03.719) 0:01:29.050 ******** 2026-01-05 01:03:52.115269 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115274 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115279 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115284 | orchestrator | 2026-01-05 01:03:52.115290 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-05 01:03:52.115295 | orchestrator | Monday 05 January 2026 01:02:18 +0000 (0:00:03.138) 0:01:32.189 ******** 2026-01-05 01:03:52.115300 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115305 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115310 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115315 | orchestrator | 2026-01-05 01:03:52.115321 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-05 01:03:52.115326 | orchestrator | Monday 05 January 2026 01:02:18 +0000 (0:00:00.274) 0:01:32.464 ******** 2026-01-05 01:03:52.115331 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-05 01:03:52.115337 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115342 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-05 01:03:52.115347 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115352 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-05 01:03:52.115358 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115363 | orchestrator | 2026-01-05 01:03:52.115368 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-05 01:03:52.115373 | orchestrator | Monday 05 January 2026 01:02:22 +0000 (0:00:03.843) 0:01:36.307 ******** 2026-01-05 01:03:52.115378 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115383 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:52.115388 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:03:52.115393 | orchestrator | 2026-01-05 01:03:52.115398 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-05 01:03:52.115407 | orchestrator | Monday 05 January 2026 01:02:27 +0000 (0:00:04.423) 0:01:40.731 ******** 2026-01-05 01:03:52.115413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:03:52.115449 | orchestrator | 2026-01-05 01:03:52.115455 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-05 01:03:52.115460 | orchestrator | Monday 05 January 2026 01:02:31 +0000 (0:00:04.035) 0:01:44.767 ******** 2026-01-05 01:03:52.115465 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:52.115470 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:52.115475 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:52.115480 | orchestrator | 2026-01-05 01:03:52.115485 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-05 01:03:52.115490 | orchestrator | Monday 05 January 2026 01:02:31 +0000 (0:00:00.350) 0:01:45.117 ******** 2026-01-05 01:03:52.115496 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115501 | orchestrator | 2026-01-05 01:03:52.115506 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-05 01:03:52.115511 | orchestrator | Monday 05 January 2026 01:02:33 +0000 (0:00:02.108) 0:01:47.226 ******** 2026-01-05 01:03:52.115516 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115521 | orchestrator | 2026-01-05 01:03:52.115526 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-05 01:03:52.115531 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:02.340) 0:01:49.567 ******** 2026-01-05 01:03:52.115536 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115541 | orchestrator | 2026-01-05 01:03:52.115546 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-05 01:03:52.115552 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:02.195) 0:01:51.762 ******** 2026-01-05 
01:03:52.115557 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115562 | orchestrator | 2026-01-05 01:03:52.115567 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-05 01:03:52.115610 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:28.551) 0:02:20.314 ******** 2026-01-05 01:03:52.115618 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115623 | orchestrator | 2026-01-05 01:03:52.115628 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-05 01:03:52.115633 | orchestrator | Monday 05 January 2026 01:03:08 +0000 (0:00:02.268) 0:02:22.582 ******** 2026-01-05 01:03:52.115638 | orchestrator | 2026-01-05 01:03:52.115644 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-05 01:03:52.115649 | orchestrator | Monday 05 January 2026 01:03:09 +0000 (0:00:00.596) 0:02:23.178 ******** 2026-01-05 01:03:52.115654 | orchestrator | 2026-01-05 01:03:52.115659 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-05 01:03:52.115664 | orchestrator | Monday 05 January 2026 01:03:09 +0000 (0:00:00.067) 0:02:23.246 ******** 2026-01-05 01:03:52.115669 | orchestrator | 2026-01-05 01:03:52.115674 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-05 01:03:52.115679 | orchestrator | Monday 05 January 2026 01:03:09 +0000 (0:00:00.072) 0:02:23.318 ******** 2026-01-05 01:03:52.115684 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:52.115690 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:52.115695 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:03:52.115700 | orchestrator | 2026-01-05 01:03:52.115705 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:52.115711 | orchestrator | testbed-node-0 : 
ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:03:52.115719 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:03:52.115724 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:03:52.115734 | orchestrator | 2026-01-05 01:03:52.115739 | orchestrator | 2026-01-05 01:03:52.115744 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:52.115750 | orchestrator | Monday 05 January 2026 01:03:51 +0000 (0:00:41.715) 0:03:05.034 ******** 2026-01-05 01:03:52.115757 | orchestrator | =============================================================================== 2026-01-05 01:03:52.115766 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.72s 2026-01-05 01:03:52.115774 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.55s 2026-01-05 01:03:52.115785 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.28s 2026-01-05 01:03:52.115794 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.73s 2026-01-05 01:03:52.115802 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.26s 2026-01-05 01:03:52.115811 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.76s 2026-01-05 01:03:52.115820 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.43s 2026-01-05 01:03:52.115829 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.97s 2026-01-05 01:03:52.115837 | orchestrator | glance : Copying over config.json files for services -------------------- 4.59s 2026-01-05 01:03:52.115845 | orchestrator | glance : Ensuring glance service ceph 
config subdir exists -------------- 4.50s 2026-01-05 01:03:52.115850 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.42s 2026-01-05 01:03:52.115855 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.25s 2026-01-05 01:03:52.115860 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.10s 2026-01-05 01:03:52.115865 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.09s 2026-01-05 01:03:52.115871 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.08s 2026-01-05 01:03:52.115876 | orchestrator | glance : Check glance containers ---------------------------------------- 4.04s 2026-01-05 01:03:52.115881 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.03s 2026-01-05 01:03:52.115895 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.96s 2026-01-05 01:03:52.115900 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.89s 2026-01-05 01:03:52.115906 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.84s 2026-01-05 01:03:52.115918 | orchestrator | 2026-01-05 01:03:52 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:03:52.115923 | orchestrator | 2026-01-05 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:55.158239 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:03:55.159337 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:03:55.161262 | orchestrator | 2026-01-05 01:03:55 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:03:55.163343 | orchestrator | 2026-01-05 
01:03:55 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:03:55.163922 | orchestrator | 2026-01-05 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:58.212949 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:03:58.213715 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:03:58.215218 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:03:58.217260 | orchestrator | 2026-01-05 01:03:58 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:03:58.217297 | orchestrator | 2026-01-05 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:01.826868 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:01.829087 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:04:01.830910 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:01.833327 | orchestrator | 2026-01-05 01:04:01 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:01.833426 | orchestrator | 2026-01-05 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:04.875543 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:04.875693 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state STARTED 2026-01-05 01:04:04.875700 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:04.876327 | orchestrator | 2026-01-05 01:04:04 | INFO  | Task 
1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:04.876398 | orchestrator | 2026-01-05 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:07.922262 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:04:07.924130 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:07.929285 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task 8480e478-edfc-44cd-832a-08a6ec3e6265 is in state SUCCESS 2026-01-05 01:04:07.929458 | orchestrator | 2026-01-05 01:04:07.931488 | orchestrator | 2026-01-05 01:04:07.931598 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:04:07.931609 | orchestrator | 2026-01-05 01:04:07.931617 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:04:07.931625 | orchestrator | Monday 05 January 2026 01:00:37 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-01-05 01:04:07.931633 | orchestrator | ok: [testbed-manager] 2026-01-05 01:04:07.931642 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:07.931649 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:07.931656 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:07.931663 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:04:07.931671 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:04:07.931678 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:04:07.931685 | orchestrator | 2026-01-05 01:04:07.931692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:04:07.931699 | orchestrator | Monday 05 January 2026 01:00:38 +0000 (0:00:00.944) 0:00:01.222 ******** 2026-01-05 01:04:07.931707 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931714 | orchestrator | ok: [testbed-node-0] => 
(item=enable_prometheus_True) 2026-01-05 01:04:07.931721 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931728 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931735 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931741 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931748 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-05 01:04:07.931756 | orchestrator | 2026-01-05 01:04:07.931763 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-05 01:04:07.931770 | orchestrator | 2026-01-05 01:04:07.931801 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:04:07.931809 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:00.730) 0:00:01.952 ******** 2026-01-05 01:04:07.931817 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:04:07.931826 | orchestrator | 2026-01-05 01:04:07.931889 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-05 01:04:07.931897 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:02.203) 0:00:04.156 ******** 2026-01-05 01:04:07.931908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.931920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:04:07.931929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.931949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 
01:04:07.931971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.931978 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.931992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932024 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932038 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932099 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:04:07.932147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932324 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932368 | orchestrator | 2026-01-05 01:04:07.932374 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:04:07.932379 | orchestrator | Monday 05 January 2026 01:00:45 +0000 (0:00:04.084) 0:00:08.240 ******** 2026-01-05 01:04:07.932385 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:04:07.932390 | orchestrator | 2026-01-05 01:04:07.932394 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-05 01:04:07.932399 | orchestrator | Monday 05 January 2026 01:00:47 +0000 (0:00:02.044) 0:00:10.284 ******** 2026-01-05 01:04:07.932404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:04:07.932414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932450 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.932455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932489 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932501 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.932524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:04:07.932595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.932608 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933062 | orchestrator | 2026-01-05 01:04:07.933066 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-05 01:04:07.933071 | orchestrator | Monday 05 January 2026 01:00:53 +0000 (0:00:06.245) 0:00:16.530 ******** 2026-01-05 01:04:07.933075 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2026-01-05 01:04:07.933080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 01:04:07.933112 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933223 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:04:07.933228 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.933234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933242 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.933246 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.933250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933265 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.933269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933325 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.933329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933341 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.933345 | orchestrator | 2026-01-05 01:04:07.933351 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-05 01:04:07.933370 | orchestrator | Monday 05 January 2026 01:00:55 +0000 (0:00:01.648) 0:00:18.178 ******** 2026-01-05 01:04:07.933375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-05 01:04:07.933407 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.933411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933442 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 01:04:07.933446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933459 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 01:04:07.933469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933476 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.933481 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:04:07.933487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:04:07.933532 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.933571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933606 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.933612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-01-05 01:04:07.933622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933639 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.933646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:04:07.933658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:04:07.933667 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.933671 | orchestrator | 2026-01-05 01:04:07.933675 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-05 01:04:07.933679 | orchestrator | Monday 05 January 2026 01:00:57 +0000 (0:00:02.303) 0:00:20.482 ******** 2026-01-05 01:04:07.933684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933688 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:04:07.933766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
01:04:07.933808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.933824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
01:04:07.933848 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933887 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:04:07.933892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.933906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933918 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933921 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.933925 | orchestrator | 2026-01-05 01:04:07.933929 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-05 01:04:07.933933 | orchestrator | Monday 05 January 2026 01:01:03 +0000 (0:00:06.080) 0:00:26.562 ******** 2026-01-05 01:04:07.933937 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:04:07.933941 | orchestrator | 2026-01-05 01:04:07.933945 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-05 01:04:07.933969 | orchestrator | Monday 05 January 2026 01:01:05 +0000 (0:00:01.384) 0:00:27.947 ******** 2026-01-05 01:04:07.933974 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.933979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.933992 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934000 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934005 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 
'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934053 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934060 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934102 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934108 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-01-05 01:04:07.934112 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934116 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098749, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:04:07.934120 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934124 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934128 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934144 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934151 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 
1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934157 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934163 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934170 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934176 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934182 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934513 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-05 01:04:07.934594 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934603 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934608 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934616 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934620 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099065, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6472452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:04:07.934660 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 
'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934668 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934674 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934681 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098737, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5198097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934687 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934705 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 
01:04:07.934716 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934728 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934734 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934742 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934748 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934752 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098753, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5235777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934760 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934768 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934776 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099060, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6430407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934780 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934785 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934789 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098761, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.527556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934793 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934801 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098744, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:04:07.934808 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099060, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6430407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934823 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934828 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098732, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5177941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934832 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098755, 'dev': 113, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5247998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934839 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098732, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5177941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099060, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6430407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934854 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934859 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934862 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099108, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6700294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934866 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099108, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6700294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 
01:04:07.934874 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098748, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5224793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934878 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099060, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6430407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934885 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099059, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6424553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934893 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098763, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.640029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:04:07.934899 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099060, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6430407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934903 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099059, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.6424553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:04:07.934906 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 
1098742, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.5202503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:04:07.934915 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934919 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934928 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934937 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.934942 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934946 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.934950 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-05 01:04:07.934958 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934962 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-05 01:04:07.934969 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.934977 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.934982 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.934986 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-05 01:04:07.934996 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-05 01:04:07.935000 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-05 01:04:07.935004 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-01-05 01:04:07.935012 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-05 01:04:07.935020 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935025 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935029 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935038 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:04:07.935043 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-05 01:04:07.935047 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-05 01:04:07.935051 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935058 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935066 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935070 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.935077 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.935093 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935100 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935107 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935113 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:07.935125 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.935137 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935145 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935160 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935168 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.935175 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935184 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935192 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:07.935205 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935213 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:07.935226 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-01-05 01:04:07.935236 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935250 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935257 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935264 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935271 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:04:07.935277 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935284 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:04:07.935293 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-01-05 01:04:07.935305 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-05 01:04:07.935317 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-05 01:04:07.935324 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935330 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935336 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-05 01:04:07.935343 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-05 01:04:07.935353 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-05 01:04:07.935364 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-05 01:04:07.935380 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-05 01:04:07.935394 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-05 01:04:07.935403 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-05 01:04:07.935410 | orchestrator |
2026-01-05 01:04:07.935417 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-05 01:04:07.935429 | orchestrator | Monday 05 January 2026 01:01:40 +0000 (0:00:35.312) 0:01:03.259 ********
2026-01-05 01:04:07.935436 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:04:07.935443 | orchestrator |
2026-01-05 01:04:07.935450 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-05 01:04:07.935456 | orchestrator | Monday 05 January 2026 01:01:41 +0000 (0:00:00.978) 0:01:04.237 ********
2026-01-05 01:04:07.935461 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935494 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:04:07.935500 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935533 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935620 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935645 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935672 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935692 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-05 01:04:07.935713 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 01:04:07.935717 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:04:07.935721 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 01:04:07.935725 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 01:04:07.935730 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 01:04:07.935733 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 01:04:07.935738 | orchestrator |
2026-01-05 01:04:07.935742 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-05 01:04:07.935746 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:03.826) 0:01:08.064 ********
2026-01-05 01:04:07.935751 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935755 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:07.935759 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935763 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:07.935767 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935770 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:04:07.935774 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935778 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:07.935782 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935786 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:04:07.935790 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935794 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:04:07.935798 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:04:07.935802 | orchestrator |
2026-01-05 01:04:07.935806 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-05 01:04:07.935810 | orchestrator | Monday 05 January 2026 01:02:02 +0000 (0:00:17.423) 0:01:25.487 ********
2026-01-05 01:04:07.935819 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 01:04:07.935823 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:07.935826 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:04:07.935831 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.935834 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:04:07.935839 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.935842 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:04:07.935846 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.935850 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:04:07.935854 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.935858 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:04:07.935862 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.935866 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-05 01:04:07.935870 | orchestrator | 2026-01-05 01:04:07.935874 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-05 01:04:07.935878 | orchestrator | Monday 05 January 2026 01:02:06 +0000 (0:00:03.925) 0:01:29.413 ******** 2026-01-05 01:04:07.935882 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935886 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935890 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.935894 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.935902 | 
orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935906 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.935913 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935917 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.935921 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-05 01:04:07.935925 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935929 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.935933 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:04:07.935937 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.935941 | orchestrator | 2026-01-05 01:04:07.935945 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-05 01:04:07.935949 | orchestrator | Monday 05 January 2026 01:02:08 +0000 (0:00:02.234) 0:01:31.648 ******** 2026-01-05 01:04:07.935952 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:04:07.935956 | orchestrator | 2026-01-05 01:04:07.935960 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-05 01:04:07.935964 | orchestrator | Monday 05 January 2026 01:02:09 +0000 (0:00:00.849) 0:01:32.498 ******** 2026-01-05 01:04:07.935968 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:04:07.935975 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.935982 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 01:04:07.935992 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:07.936007 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:04:07.936015 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:04:07.936021 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:04:07.936028 | orchestrator |
2026-01-05 01:04:07.936034 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-05 01:04:07.936040 | orchestrator | Monday 05 January 2026 01:02:10 +0000 (0:00:00.795) 0:01:33.293 ********
2026-01-05 01:04:07.936046 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:04:07.936053 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:04:07.936060 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:04:07.936067 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:07.936073 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:04:07.936081 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:07.936088 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:07.936095 | orchestrator |
2026-01-05 01:04:07.936103 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-05 01:04:07.936111 | orchestrator | Monday 05 January 2026 01:02:12 +0000 (0:00:02.379) 0:01:35.673 ********
2026-01-05 01:04:07.936118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:04:07.936125 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:07.936132 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:04:07.936139 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:07.936143 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 01:04:07.936147 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:04:07.936151 |
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:04:07.936154 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.936158 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:04:07.936162 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.936166 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:04:07.936170 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.936173 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:04:07.936177 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.936181 | orchestrator | 2026-01-05 01:04:07.936185 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-05 01:04:07.936189 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:01.736) 0:01:37.410 ******** 2026-01-05 01:04:07.936193 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936197 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936201 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.936205 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.936208 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936213 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.936216 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936220 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-05 01:04:07.936224 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.936228 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936232 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.936239 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:04:07.936249 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.936253 | orchestrator | 2026-01-05 01:04:07.936257 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-05 01:04:07.936265 | orchestrator | Monday 05 January 2026 01:02:16 +0000 (0:00:01.609) 0:01:39.019 ******** 2026-01-05 01:04:07.936269 | orchestrator | [WARNING]: Skipped 2026-01-05 01:04:07.936273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-05 01:04:07.936277 | orchestrator | due to this access issue: 2026-01-05 01:04:07.936281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-05 01:04:07.936285 | orchestrator | not a directory 2026-01-05 01:04:07.936288 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:04:07.936292 | orchestrator | 2026-01-05 01:04:07.936296 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-05 01:04:07.936300 | orchestrator | Monday 05 January 2026 01:02:17 +0000 (0:00:01.496) 0:01:40.516 ******** 2026-01-05 01:04:07.936303 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:04:07.936307 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.936311 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.936315 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.936319 
| orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.936322 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.936326 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.936330 | orchestrator | 2026-01-05 01:04:07.936334 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-05 01:04:07.936337 | orchestrator | Monday 05 January 2026 01:02:18 +0000 (0:00:00.851) 0:01:41.367 ******** 2026-01-05 01:04:07.936341 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:04:07.936345 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:07.936349 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:07.936353 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:07.936356 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:07.936360 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:07.936364 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:07.936368 | orchestrator | 2026-01-05 01:04:07.936372 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-05 01:04:07.936376 | orchestrator | Monday 05 January 2026 01:02:19 +0000 (0:00:00.747) 0:01:42.114 ******** 2026-01-05 01:04:07.936381 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:04:07.936386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936426 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:04:07.936444 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 01:04:07.936466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:04:07.936474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936502 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:04:07.936573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:04:07.936586 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:04:07.936590 | orchestrator |
2026-01-05 01:04:07.936594 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-05 01:04:07.936598 | orchestrator | Monday 05 January 2026 01:02:23 +0000 (0:00:04.382) 0:01:46.497 ********
2026-01-05 01:04:07.936602 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-05 01:04:07.936606 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:04:07.936610 | orchestrator |
2026-01-05 01:04:07.936613 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936617 | orchestrator | Monday 05 January 2026 01:02:24 +0000 (0:00:01.271) 0:01:47.769 ********
2026-01-05 01:04:07.936621 | orchestrator |
2026-01-05 01:04:07.936625 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936629 | orchestrator | Monday 05 January 2026 01:02:24 +0000 (0:00:00.071) 0:01:47.840 ********
2026-01-05 01:04:07.936637 | orchestrator |
2026-01-05 01:04:07.936641 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936645 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.068) 0:01:47.908 ********
2026-01-05 01:04:07.936649 | orchestrator |
2026-01-05 01:04:07.936653 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936657 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.070) 0:01:47.979 ********
2026-01-05 01:04:07.936660 | orchestrator |
2026-01-05 01:04:07.936664 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936668 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.283) 0:01:48.262 ********
2026-01-05 01:04:07.936672 | orchestrator |
2026-01-05 01:04:07.936676 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936680 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.075) 0:01:48.338 ********
2026-01-05 01:04:07.936683 | orchestrator |
2026-01-05 01:04:07.936687 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-05 01:04:07.936691 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.081) 0:01:48.419 ********
2026-01-05 01:04:07.936695 | orchestrator |
2026-01-05 01:04:07.936699 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-05 01:04:07.936703 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:00.093) 0:01:48.513 ********
2026-01-05 01:04:07.936707 | orchestrator | changed: [testbed-manager]
2026-01-05 01:04:07.936711 | orchestrator |
2026-01-05 01:04:07.936714 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-05 01:04:07.936718 | orchestrator | Monday 05 January 2026 01:02:50 +0000 (0:00:24.489) 0:02:13.003 ********
2026-01-05 01:04:07.936722 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:04:07.936727 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:04:07.936730 | orchestrator | changed: [testbed-manager]
2026-01-05 01:04:07.936734 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:04:07.936738 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:07.936742 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:07.936746 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:07.936750 | orchestrator |
2026-01-05 01:04:07.936754 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-05 01:04:07.936758 | orchestrator | Monday 05 January 2026 01:03:02 +0000 (0:00:12.372) 0:02:25.375 ********
2026-01-05 01:04:07.936762 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:07.936765 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:07.936770 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:07.936773 | orchestrator |
2026-01-05 01:04:07.936777 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-05 01:04:07.936781 | orchestrator | Monday 05 January 2026 01:03:08 +0000 (0:00:05.854) 0:02:31.229 ********
2026-01-05 01:04:07.936785 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:07.936789 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:07.936793 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:07.936797 | orchestrator |
2026-01-05 01:04:07.936801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-05 01:04:07.936805 | orchestrator | Monday 05 January 2026 01:03:18 +0000 (0:00:10.514) 0:02:41.744 ********
2026-01-05 01:04:07.936811 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:07.936815 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:04:07.936819 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:07.936823 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:07.936827 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:04:07.936835 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:04:07.936839 | orchestrator | changed: [testbed-manager]
2026-01-05 01:04:07.936842 | orchestrator |
2026-01-05 01:04:07.936846 | orchestrator | RUNNING HANDLER [prometheus :
Restart prometheus-alertmanager container] ******* 2026-01-05 01:04:07.936850 | orchestrator | Monday 05 January 2026 01:03:34 +0000 (0:00:15.133) 0:02:56.878 ******** 2026-01-05 01:04:07.936858 | orchestrator | changed: [testbed-manager] 2026-01-05 01:04:07.936865 | orchestrator | 2026-01-05 01:04:07.936869 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-05 01:04:07.936873 | orchestrator | Monday 05 January 2026 01:03:41 +0000 (0:00:07.817) 0:03:04.695 ******** 2026-01-05 01:04:07.936877 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:04:07.936881 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:07.936885 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:04:07.936889 | orchestrator | 2026-01-05 01:04:07.936893 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-05 01:04:07.936897 | orchestrator | Monday 05 January 2026 01:03:53 +0000 (0:00:11.433) 0:03:16.129 ******** 2026-01-05 01:04:07.936901 | orchestrator | changed: [testbed-manager] 2026-01-05 01:04:07.936904 | orchestrator | 2026-01-05 01:04:07.936908 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-05 01:04:07.936913 | orchestrator | Monday 05 January 2026 01:03:58 +0000 (0:00:05.077) 0:03:21.207 ******** 2026-01-05 01:04:07.936916 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:04:07.936920 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:04:07.936924 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:04:07.936928 | orchestrator | 2026-01-05 01:04:07.936932 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:07.936936 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-05 01:04:07.936941 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:04:07.936945 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:04:07.936949 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:04:07.936953 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:04:07.936957 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:04:07.936961 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:04:07.936965 | orchestrator | 2026-01-05 01:04:07.936969 | orchestrator | 2026-01-05 01:04:07.936973 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:07.936977 | orchestrator | Monday 05 January 2026 01:04:05 +0000 (0:00:06.686) 0:03:27.893 ******** 2026-01-05 01:04:07.936981 | orchestrator | =============================================================================== 2026-01-05 01:04:07.936985 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 35.31s 2026-01-05 01:04:07.936988 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.49s 2026-01-05 01:04:07.936992 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.42s 2026-01-05 01:04:07.936997 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.13s 2026-01-05 01:04:07.937000 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.37s 2026-01-05 01:04:07.937005 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.43s 2026-01-05 01:04:07.937008 | orchestrator | prometheus : Restart 
prometheus-memcached-exporter container ----------- 10.51s 2026-01-05 01:04:07.937013 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.82s 2026-01-05 01:04:07.937021 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.69s 2026-01-05 01:04:07.937025 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.25s 2026-01-05 01:04:07.937029 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.08s 2026-01-05 01:04:07.937032 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.85s 2026-01-05 01:04:07.937037 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.08s 2026-01-05 01:04:07.937041 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.38s 2026-01-05 01:04:07.937045 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.08s 2026-01-05 01:04:07.937049 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.93s 2026-01-05 01:04:07.937053 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.83s 2026-01-05 01:04:07.937060 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.38s 2026-01-05 01:04:07.937065 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.30s 2026-01-05 01:04:07.937068 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.23s 2026-01-05 01:04:07.937075 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:07.937079 | orchestrator | 2026-01-05 01:04:07 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:07.937084 | orchestrator | 2026-01-05 
01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:10.988688 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:04:10.990088 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:10.991810 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:10.993631 | orchestrator | 2026-01-05 01:04:10 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:10.993833 | orchestrator | 2026-01-05 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:14.052777 | orchestrator | 2026-01-05 01:04:14 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:04:14.056335 | orchestrator | 2026-01-05 01:04:14 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:14.059303 | orchestrator | 2026-01-05 01:04:14 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:14.061897 | orchestrator | 2026-01-05 01:04:14 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:14.061958 | orchestrator | 2026-01-05 01:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:17.101766 | orchestrator | 2026-01-05 01:04:17 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:04:17.103055 | orchestrator | 2026-01-05 01:04:17 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:04:17.104857 | orchestrator | 2026-01-05 01:04:17 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:04:17.106983 | orchestrator | 2026-01-05 01:04:17 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:04:17.107235 | orchestrator | 2026-01-05 01:04:17 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:06:00.557039 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task
f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:00.558311 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:00.558842 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:06:00.559746 | orchestrator | 2026-01-05 01:06:00 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:00.559805 | orchestrator | 2026-01-05 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:03.616869 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:03.617944 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:03.618755 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:06:03.620707 | orchestrator | 2026-01-05 01:06:03 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:03.620763 | orchestrator | 2026-01-05 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:06.645933 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:06.646629 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:06.647936 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:06:06.649987 | orchestrator | 2026-01-05 01:06:06 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:06.650066 | orchestrator | 2026-01-05 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:09.684265 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 
f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:09.684590 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:09.685235 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:06:09.686134 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:09.686205 | orchestrator | 2026-01-05 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:12.783798 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:12.784952 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:12.788580 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state STARTED 2026-01-05 01:06:12.790348 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:12.790395 | orchestrator | 2026-01-05 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:15.821036 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:15.821418 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:06:15.822111 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:15.823843 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 54ba5a12-6e2b-4fbe-acab-6d960cd69116 is in state SUCCESS 2026-01-05 01:06:15.825086 | orchestrator | 2026-01-05 01:06:15.826583 | orchestrator | 2026-01-05 01:06:15.826618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
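The wait loop above checks each pending task's state, then sleeps and re-checks until every task leaves STARTED (e.g. reaching SUCCESS, as task 54ba5a12 does at 01:06:15). A minimal sketch of that polling pattern, assuming illustrative names only (`wait_for_tasks` and `get_task_state` are hypothetical, not the actual OSISM API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll task states until none is still pending/running.

    get_task_state is a caller-supplied function (hypothetical here)
    returning a state string such as "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                # Terminal state reached; stop polling this task.
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

This mirrors the log's behavior: one status line per task per round, a "Wait 1 second(s)" line between rounds, and tasks dropping out of the loop individually as they finish.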
2026-01-05 01:06:15.826629 | orchestrator |
2026-01-05 01:06:15.826638 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:06:15.826648 | orchestrator | Monday 05 January 2026 01:03:56 +0000 (0:00:00.348) 0:00:00.348 ********
2026-01-05 01:06:15.826657 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:06:15.826668 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:06:15.826677 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:06:15.826687 | orchestrator |
2026-01-05 01:06:15.826697 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:06:15.826706 | orchestrator | Monday 05 January 2026 01:03:56 +0000 (0:00:00.300) 0:00:00.648 ********
2026-01-05 01:06:15.826716 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-05 01:06:15.826754 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-05 01:06:15.826764 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-05 01:06:15.826774 | orchestrator |
2026-01-05 01:06:15.826782 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-05 01:06:15.826790 | orchestrator |
2026-01-05 01:06:15.826800 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-05 01:06:15.826809 | orchestrator | Monday 05 January 2026 01:03:57 +0000 (0:00:00.575) 0:00:01.224 ********
2026-01-05 01:06:15.826818 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:06:15.826828 | orchestrator |
2026-01-05 01:06:15.826837 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-05 01:06:15.826845 | orchestrator | Monday 05 January 2026 01:03:58 +0000 (0:00:00.587) 0:00:01.811 ********
2026-01-05 01:06:15.826853 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-05 01:06:15.826858 | orchestrator |
2026-01-05 01:06:15.826864 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-05 01:06:15.826960 | orchestrator | Monday 05 January 2026 01:04:01 +0000 (0:00:03.502) 0:00:05.314 ********
2026-01-05 01:06:15.826966 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-05 01:06:15.826972 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-05 01:06:15.826977 | orchestrator |
2026-01-05 01:06:15.826982 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-05 01:06:15.826987 | orchestrator | Monday 05 January 2026 01:04:08 +0000 (0:00:06.606) 0:00:11.921 ********
2026-01-05 01:06:15.826993 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:06:15.826998 | orchestrator |
2026-01-05 01:06:15.827003 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-05 01:06:15.827008 | orchestrator | Monday 05 January 2026 01:04:11 +0000 (0:00:03.189) 0:00:15.110 ********
2026-01-05 01:06:15.827014 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:06:15.827019 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-05 01:06:15.827024 | orchestrator |
2026-01-05 01:06:15.827029 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-05 01:06:15.827034 | orchestrator | Monday 05 January 2026 01:04:15 +0000 (0:00:03.977) 0:00:19.088 ********
2026-01-05 01:06:15.827039 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:06:15.827045 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-05 01:06:15.827050 | orchestrator | changed: 
[testbed-node-0] => (item=creator) 2026-01-05 01:06:15.827055 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-05 01:06:15.827060 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-05 01:06:15.827065 | orchestrator | 2026-01-05 01:06:15.827070 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-05 01:06:15.827075 | orchestrator | Monday 05 January 2026 01:04:31 +0000 (0:00:15.889) 0:00:34.977 ******** 2026-01-05 01:06:15.827081 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-05 01:06:15.827086 | orchestrator | 2026-01-05 01:06:15.827091 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-05 01:06:15.827096 | orchestrator | Monday 05 January 2026 01:04:35 +0000 (0:00:03.815) 0:00:38.792 ******** 2026-01-05 01:06:15.827105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.827140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.827147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.827155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827201 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827227 | orchestrator | 2026-01-05 01:06:15.827234 | orchestrator | TASK [barbican : Ensuring vassals config directories 
exist] ********************
2026-01-05 01:06:15.827240 | orchestrator | Monday 05 January 2026 01:04:37 +0000 (0:00:02.165) 0:00:40.958 ********
2026-01-05 01:06:15.827246 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-01-05 01:06:15.827252 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-01-05 01:06:15.827258 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-01-05 01:06:15.827265 | orchestrator |
2026-01-05 01:06:15.827271 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-01-05 01:06:15.827277 | orchestrator | Monday 05 January 2026 01:04:38 +0000 (0:00:01.588) 0:00:42.547 ********
2026-01-05 01:06:15.827284 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:06:15.827290 | orchestrator |
2026-01-05 01:06:15.827299 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-01-05 01:06:15.827307 | orchestrator | Monday 05 January 2026 01:04:39 +0000 (0:00:01.130) 0:00:42.816 ********
2026-01-05 01:06:15.827361 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:06:15.827369 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:06:15.827375 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:06:15.827382 | orchestrator |
2026-01-05 01:06:15.827388 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-05 01:06:15.827394 | orchestrator | Monday 05 January 2026 01:04:40 +0000 (0:00:01.130) 0:00:43.946 ********
2026-01-05 01:06:15.827400 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:06:15.827426 | orchestrator |
2026-01-05 01:06:15.827432 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-01-05 01:06:15.827438 | orchestrator | Monday 05 January 2026 01:04:41 +0000 
(0:00:00.918) 0:00:44.865 ******** 2026-01-05 01:06:15.827445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.827460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2026-01-05 01:06:15.827467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.827474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.827526 | orchestrator | 2026-01-05 01:06:15.827534 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-05 01:06:15.827543 | orchestrator | Monday 05 January 2026 01:04:44 +0000 (0:00:03.662) 0:00:48.528 ******** 2026-01-05 01:06:15.827552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827586 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:06:15.827604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 
01:06:15.827622 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:06:15.827627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827652 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:06:15.827657 | orchestrator | 2026-01-05 01:06:15.827699 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-05 01:06:15.827710 | orchestrator | Monday 05 January 2026 01:04:47 +0000 (0:00:02.539) 0:00:51.067 ******** 2026-01-05 01:06:15.827719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827799 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:06:15.827832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827870 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:06:15.827875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.827885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.827896 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
01:06:15.827901 | orchestrator | 2026-01-05 01:06:15.827906 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-05 01:06:15.827911 | orchestrator | Monday 05 January 2026 01:04:48 +0000 (0:00:00.924) 0:00:51.992 ******** 2026-01-05 01:06:15.827916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828165 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828190 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828206 | orchestrator | 2026-01-05 01:06:15.828211 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-05 01:06:15.828217 | orchestrator | Monday 05 January 2026 01:04:53 +0000 (0:00:05.341) 0:00:57.333 ******** 2026-01-05 01:06:15.828222 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828227 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:06:15.828232 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:06:15.828237 | orchestrator | 2026-01-05 01:06:15.828242 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-05 01:06:15.828247 | orchestrator | Monday 05 January 2026 01:04:57 +0000 (0:00:03.518) 0:01:00.852 
******** 2026-01-05 01:06:15.828252 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:06:15.828257 | orchestrator | 2026-01-05 01:06:15.828262 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-05 01:06:15.828267 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:01.408) 0:01:02.261 ******** 2026-01-05 01:06:15.828272 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:06:15.828277 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:06:15.828282 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:06:15.828287 | orchestrator | 2026-01-05 01:06:15.828292 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-05 01:06:15.828297 | orchestrator | Monday 05 January 2026 01:04:59 +0000 (0:00:01.146) 0:01:03.407 ******** 2026-01-05 01:06:15.828303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828387 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828433 | orchestrator | 2026-01-05 01:06:15.828439 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-05 
01:06:15.828444 | orchestrator | Monday 05 January 2026 01:05:12 +0000 (0:00:13.051) 0:01:16.459 ******** 2026-01-05 01:06:15.828449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.828454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828466 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:06:15.828477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.828486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828497 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:06:15.828502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:06:15.828508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:06:15.828518 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:06:15.828523 | orchestrator | 2026-01-05 01:06:15.828529 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-05 01:06:15.828534 | orchestrator | Monday 05 January 2026 01:05:13 +0000 (0:00:01.103) 0:01:17.562 ******** 2026-01-05 01:06:15.828548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:06:15.828568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828589 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828606 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:06:15.828611 | orchestrator | 2026-01-05 01:06:15.828616 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-05 01:06:15.828621 | orchestrator | Monday 05 January 2026 01:05:18 +0000 (0:00:04.613) 0:01:22.176 ******** 2026-01-05 01:06:15.828626 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:06:15.828631 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:06:15.828636 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:06:15.828641 | orchestrator | 2026-01-05 01:06:15.828646 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-05 01:06:15.828651 | orchestrator | Monday 05 January 2026 01:05:19 +0000 (0:00:00.745) 0:01:22.922 ******** 2026-01-05 01:06:15.828656 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828661 | orchestrator | 2026-01-05 01:06:15.828666 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-05 01:06:15.828672 | orchestrator | Monday 05 January 2026 01:05:21 +0000 (0:00:02.348) 0:01:25.270 ******** 2026-01-05 01:06:15.828676 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828682 | orchestrator | 2026-01-05 01:06:15.828687 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-05 01:06:15.828692 | orchestrator | Monday 05 January 2026 
01:05:24 +0000 (0:00:02.512) 0:01:27.782 ******** 2026-01-05 01:06:15.828701 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828706 | orchestrator | 2026-01-05 01:06:15.828711 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 01:06:15.828716 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:12.320) 0:01:40.103 ******** 2026-01-05 01:06:15.828721 | orchestrator | 2026-01-05 01:06:15.828726 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 01:06:15.828731 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.063) 0:01:40.167 ******** 2026-01-05 01:06:15.828736 | orchestrator | 2026-01-05 01:06:15.828743 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 01:06:15.828750 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.067) 0:01:40.234 ******** 2026-01-05 01:06:15.828755 | orchestrator | 2026-01-05 01:06:15.828761 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-05 01:06:15.828767 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.075) 0:01:40.309 ******** 2026-01-05 01:06:15.828773 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828779 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:06:15.828785 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:06:15.828791 | orchestrator | 2026-01-05 01:06:15.828797 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-05 01:06:15.828803 | orchestrator | Monday 05 January 2026 01:05:47 +0000 (0:00:10.616) 0:01:50.925 ******** 2026-01-05 01:06:15.828808 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:06:15.828818 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828827 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:06:15.828833 | 
orchestrator | 2026-01-05 01:06:15.828839 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-05 01:06:15.828845 | orchestrator | Monday 05 January 2026 01:06:00 +0000 (0:00:13.377) 0:02:04.303 ******** 2026-01-05 01:06:15.828851 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:06:15.828857 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:06:15.828863 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:06:15.828869 | orchestrator | 2026-01-05 01:06:15.828875 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:06:15.828900 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:06:15.828908 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 01:06:15.828915 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 01:06:15.828921 | orchestrator | 2026-01-05 01:06:15.828927 | orchestrator | 2026-01-05 01:06:15.828933 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:06:15.828939 | orchestrator | Monday 05 January 2026 01:06:12 +0000 (0:00:12.276) 0:02:16.579 ******** 2026-01-05 01:06:15.828945 | orchestrator | =============================================================================== 2026-01-05 01:06:15.828951 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.89s 2026-01-05 01:06:15.828957 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.38s 2026-01-05 01:06:15.828963 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.05s 2026-01-05 01:06:15.828969 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.32s 2026-01-05 
01:06:15.828975 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.28s 2026-01-05 01:06:15.828981 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.62s 2026-01-05 01:06:15.828988 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.61s 2026-01-05 01:06:15.828994 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.34s 2026-01-05 01:06:15.829006 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.61s 2026-01-05 01:06:15.829015 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.98s 2026-01-05 01:06:15.829025 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.82s 2026-01-05 01:06:15.829034 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.66s 2026-01-05 01:06:15.829044 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.52s 2026-01-05 01:06:15.829053 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.50s 2026-01-05 01:06:15.829063 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.19s 2026-01-05 01:06:15.829073 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.54s 2026-01-05 01:06:15.829082 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.51s 2026-01-05 01:06:15.829092 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.35s 2026-01-05 01:06:15.829100 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.17s 2026-01-05 01:06:15.829109 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.59s 2026-01-05 
01:06:15.829135 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:15.829145 | orchestrator | 2026-01-05 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:18.856175 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state STARTED 2026-01-05 01:06:18.858151 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:06:18.858913 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:06:18.859875 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:06:18.859905 | orchestrator | 2026-01-05 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:28.918674 | orchestrator | 2026-01-05 01:07:28 | INFO  | Task f724cabd-0595-436f-b027-a649ea816f04 is in state SUCCESS 2026-01-05 01:07:28.920233 | orchestrator | 2026-01-05 01:07:28.920281 | orchestrator | 2026-01-05 01:07:28.920298 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:07:28.920304 | orchestrator | 2026-01-05 01:07:28.920308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:07:28.920314 | orchestrator | Monday 05 January 2026 01:04:10 +0000 (0:00:00.279) 0:00:00.279 ******** 2026-01-05 01:07:28.920318 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:07:28.920323 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:07:28.920327 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:07:28.920331 | orchestrator | 2026-01-05 01:07:28.920336 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:07:28.920340 | orchestrator | Monday 05 January 2026 01:04:10 +0000 (0:00:00.341) 0:00:00.620 ******** 2026-01-05 01:07:28.920344 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-05 01:07:28.920349 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-05 01:07:28.920353 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-05 01:07:28.920356 | orchestrator | 2026-01-05 01:07:28.920360 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-05 01:07:28.920364 | orchestrator | 2026-01-05 01:07:28.920368 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-05 01:07:28.920372 | orchestrator | Monday 05 January 2026 01:04:11 +0000 (0:00:00.436) 0:00:01.057 ******** 2026-01-05 01:07:28.920376 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:07:28.920380 | orchestrator | 2026-01-05 01:07:28.920384 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-05 01:07:28.920388 | orchestrator | Monday 05 January 2026 01:04:11 +0000 (0:00:00.589) 0:00:01.646 ******** 2026-01-05 01:07:28.920392 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-05 01:07:28.920396 | orchestrator | 2026-01-05 01:07:28.920400 | orchestrator | TASK [service-ks-register : designate 
| Creating endpoints] ******************** 2026-01-05 01:07:28.920422 | orchestrator | Monday 05 January 2026 01:04:14 +0000 (0:00:03.334) 0:00:04.981 ******** 2026-01-05 01:07:28.920427 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-05 01:07:28.920434 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-05 01:07:28.920440 | orchestrator | 2026-01-05 01:07:28.920447 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-05 01:07:28.920454 | orchestrator | Monday 05 January 2026 01:04:21 +0000 (0:00:06.433) 0:00:11.414 ******** 2026-01-05 01:07:28.920464 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 01:07:28.920471 | orchestrator | 2026-01-05 01:07:28.920477 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-05 01:07:28.920484 | orchestrator | Monday 05 January 2026 01:04:24 +0000 (0:00:03.394) 0:00:14.808 ******** 2026-01-05 01:07:28.920491 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:07:28.920496 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-05 01:07:28.920500 | orchestrator | 2026-01-05 01:07:28.920504 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-05 01:07:28.920508 | orchestrator | Monday 05 January 2026 01:04:28 +0000 (0:00:03.855) 0:00:18.664 ******** 2026-01-05 01:07:28.920512 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:07:28.920515 | orchestrator | 2026-01-05 01:07:28.920519 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-05 01:07:28.920523 | orchestrator | Monday 05 January 2026 01:04:32 +0000 (0:00:03.778) 0:00:22.442 ******** 2026-01-05 01:07:28.920527 | orchestrator | 
changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-05 01:07:28.920530 | orchestrator | 2026-01-05 01:07:28.920534 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-05 01:07:28.920538 | orchestrator | Monday 05 January 2026 01:04:36 +0000 (0:00:04.291) 0:00:26.734 ******** 2026-01-05 01:07:28.920581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920745 | orchestrator | 2026-01-05 01:07:28.920749 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-05 01:07:28.920753 | orchestrator | Monday 05 January 2026 01:04:40 +0000 (0:00:03.875) 0:00:30.609 ******** 2026-01-05 01:07:28.920757 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.920761 | orchestrator | 2026-01-05 01:07:28.920764 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-05 01:07:28.920768 | orchestrator | Monday 05 January 2026 01:04:40 +0000 (0:00:00.130) 0:00:30.740 ******** 2026-01-05 01:07:28.920772 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.920776 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:28.920779 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:28.920783 | orchestrator | 2026-01-05 01:07:28.920787 | orchestrator | TASK [designate : include_tasks] 
*********************************************** 2026-01-05 01:07:28.920790 | orchestrator | Monday 05 January 2026 01:04:41 +0000 (0:00:00.316) 0:00:31.056 ******** 2026-01-05 01:07:28.920795 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:07:28.920799 | orchestrator | 2026-01-05 01:07:28.920804 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-05 01:07:28.920808 | orchestrator | Monday 05 January 2026 01:04:42 +0000 (0:00:01.037) 0:00:32.094 ******** 2026-01-05 01:07:28.920819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.920837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920904 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.920927 | orchestrator | 2026-01-05 01:07:28.920932 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-05 01:07:28.920936 | orchestrator | Monday 05 January 2026 01:04:48 +0000 (0:00:06.647) 0:00:38.742 ******** 2026-01-05 01:07:28.920948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.920958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.920963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.920968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.920973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.920977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.920987 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.920991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.921361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.921378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 
01:07:28.921403 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:28.921407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.921420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.921424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921447 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:28.921451 | orchestrator | 2026-01-05 01:07:28.921455 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-05 01:07:28.921459 | orchestrator | Monday 05 January 2026 01:04:51 +0000 (0:00:02.464) 0:00:41.206 ******** 2026-01-05 01:07:28.921463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.921471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.921476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921495 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:28.921499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.921508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.921512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921532 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.921536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.921546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.921551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.921580 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:28.921584 | orchestrator | 2026-01-05 01:07:28.921588 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-05 01:07:28.921592 | orchestrator | Monday 05 January 2026 01:04:53 +0000 (0:00:02.112) 0:00:43.319 ******** 2026-01-05 01:07:28.921596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.921607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.921611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.921618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 
01:07:28.921626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 
01:07:28.921646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.921677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.922270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.922289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.922302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.922307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.922331 | orchestrator | 2026-01-05 01:07:28.922338 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-05 01:07:28.922344 | orchestrator | Monday 05 January 2026 01:05:00 +0000 (0:00:07.553) 0:00:50.872 ******** 2026-01-05 01:07:28.922351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.922383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922551 | orchestrator |
2026-01-05 01:07:28.922555 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-01-05 01:07:28.922559 | orchestrator | Monday 05 January 2026 01:05:28 +0000 (0:00:28.026) 0:01:18.899 ********
2026-01-05 01:07:28.922563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-05 01:07:28.922567 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-05 01:07:28.922571 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-05 01:07:28.922575 | orchestrator |
2026-01-05 01:07:28.922579 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-01-05 01:07:28.922582 | orchestrator | Monday 05 January 2026 01:05:34 +0000 (0:00:05.655) 0:01:24.555 ********
2026-01-05 01:07:28.922586 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-05 01:07:28.922590 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-05 01:07:28.922593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-05 01:07:28.922597 | orchestrator |
2026-01-05 01:07:28.922601 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-01-05 01:07:28.922605 | orchestrator | Monday 05 January 2026 01:05:38 +0000 (0:00:04.239) 0:01:28.795 ********
2026-01-05 01:07:28.922612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922766 | orchestrator |
2026-01-05 01:07:28.922770 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-05 01:07:28.922774 | orchestrator | Monday 05 January 2026 01:05:42 +0000 (0:00:04.075) 0:01:32.870 ********
2026-01-05 01:07:28.922781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.922811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.922995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.922999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.923018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923048 | orchestrator |
2026-01-05 01:07:28.923052 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-05 01:07:28.923056 | orchestrator | Monday 05 January 2026 01:05:45 +0000 (0:00:02.802) 0:01:35.672 ********
2026-01-05 01:07:28.923059 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:28.923064 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:28.923068 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:28.923071 | orchestrator |
2026-01-05 01:07:28.923075 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-05 01:07:28.923079 | orchestrator | Monday 05 January 2026 01:05:46 +0000 (0:00:00.440) 0:01:36.113 ********
2026-01-05 01:07:28.923086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 01:07:28.923123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-05 01:07:28.923128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-05 01:07:28.923132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.923151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.923162 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.923166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923185 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:28.923192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 01:07:28.923218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 01:07:28.923223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:07:28.923242 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:28.923246 | orchestrator | 2026-01-05 01:07:28.923249 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-05 01:07:28.923253 | orchestrator | Monday 05 January 2026 01:05:47 +0000 (0:00:01.515) 0:01:37.628 ******** 2026-01-05 01:07:28.923303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.923311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.923315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 01:07:28.923332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923336 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923354 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:07:28.923410 | orchestrator | 2026-01-05 01:07:28.923414 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-05 01:07:28.923418 | orchestrator | Monday 05 January 2026 01:05:53 +0000 (0:00:06.253) 0:01:43.882 ******** 2026-01-05 01:07:28.923422 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:28.923425 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:28.923429 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:28.923433 | orchestrator | 2026-01-05 01:07:28.923437 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-05 01:07:28.923441 | orchestrator | Monday 05 January 2026 01:05:54 +0000 (0:00:00.276) 0:01:44.159 ******** 2026-01-05 01:07:28.923445 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-05 01:07:28.923449 | orchestrator | 2026-01-05 
01:07:28.923453 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-01-05 01:07:28.923456 | orchestrator | Monday 05 January 2026 01:05:56 +0000 (0:00:02.201) 0:01:46.360 ********
2026-01-05 01:07:28.923460 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:07:28.923464 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-01-05 01:07:28.923468 | orchestrator |
2026-01-05 01:07:28.923472 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-01-05 01:07:28.923476 | orchestrator | Monday 05 January 2026 01:05:58 +0000 (0:00:02.258) 0:01:48.619 ********
2026-01-05 01:07:28.923482 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923486 | orchestrator |
2026-01-05 01:07:28.923490 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 01:07:28.923494 | orchestrator | Monday 05 January 2026 01:06:16 +0000 (0:00:17.492) 0:02:06.112 ********
2026-01-05 01:07:28.923497 | orchestrator |
2026-01-05 01:07:28.923501 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 01:07:28.923505 | orchestrator | Monday 05 January 2026 01:06:16 +0000 (0:00:00.148) 0:02:06.260 ********
2026-01-05 01:07:28.923509 | orchestrator |
2026-01-05 01:07:28.923513 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 01:07:28.923516 | orchestrator | Monday 05 January 2026 01:06:16 +0000 (0:00:00.141) 0:02:06.402 ********
2026-01-05 01:07:28.923520 | orchestrator |
2026-01-05 01:07:28.923524 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-01-05 01:07:28.923530 | orchestrator | Monday 05 January 2026 01:06:16 +0000 (0:00:00.139) 0:02:06.542 ********
2026-01-05 01:07:28.923534 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923541 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923545 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923548 | orchestrator |
2026-01-05 01:07:28.923552 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-01-05 01:07:28.923556 | orchestrator | Monday 05 January 2026 01:06:29 +0000 (0:00:12.758) 0:02:19.300 ********
2026-01-05 01:07:28.923560 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923564 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923567 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923571 | orchestrator |
2026-01-05 01:07:28.923575 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-01-05 01:07:28.923579 | orchestrator | Monday 05 January 2026 01:06:43 +0000 (0:00:14.252) 0:02:33.552 ********
2026-01-05 01:07:28.923583 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923586 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923590 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923594 | orchestrator |
2026-01-05 01:07:28.923598 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-01-05 01:07:28.923601 | orchestrator | Monday 05 January 2026 01:06:49 +0000 (0:00:05.912) 0:02:39.465 ********
2026-01-05 01:07:28.923605 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923609 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923613 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923617 | orchestrator |
2026-01-05 01:07:28.923621 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-01-05 01:07:28.923626 | orchestrator | Monday 05 January 2026 01:07:01 +0000 (0:00:12.219) 0:02:51.684 ********
2026-01-05 01:07:28.923630 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923634 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923639 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923643 | orchestrator |
2026-01-05 01:07:28.923648 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-01-05 01:07:28.923653 | orchestrator | Monday 05 January 2026 01:07:08 +0000 (0:00:06.369) 0:02:58.053 ********
2026-01-05 01:07:28.923657 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923662 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:28.923666 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:28.923670 | orchestrator |
2026-01-05 01:07:28.923675 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-01-05 01:07:28.923679 | orchestrator | Monday 05 January 2026 01:07:19 +0000 (0:00:11.590) 0:03:09.644 ********
2026-01-05 01:07:28.923684 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:28.923688 | orchestrator |
2026-01-05 01:07:28.923693 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:07:28.923697 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:07:28.923704 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 01:07:28.923709 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 01:07:28.923713 | orchestrator |
2026-01-05 01:07:28.923718 | orchestrator |
2026-01-05 01:07:28.923723 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:07:28.923727 | orchestrator | Monday 05 January 2026 01:07:26 +0000 (0:00:07.242) 0:03:16.887 ********
2026-01-05 01:07:28.923732 | orchestrator | ===============================================================================
2026-01-05 01:07:28.923736 | orchestrator | designate : Copying over designate.conf -------------------------------- 28.03s
2026-01-05 01:07:28.923741 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.49s
2026-01-05 01:07:28.923745 | orchestrator | designate : Restart designate-api container ---------------------------- 14.25s
2026-01-05 01:07:28.923753 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.76s
2026-01-05 01:07:28.923757 | orchestrator | designate : Restart designate-producer container ----------------------- 12.22s
2026-01-05 01:07:28.923761 | orchestrator | designate : Restart designate-worker container ------------------------- 11.59s
2026-01-05 01:07:28.923766 | orchestrator | designate : Copying over config.json files for services ----------------- 7.55s
2026-01-05 01:07:28.923770 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.24s
2026-01-05 01:07:28.923775 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.65s
2026-01-05 01:07:28.923779 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.43s
2026-01-05 01:07:28.923784 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.37s
2026-01-05 01:07:28.923790 | orchestrator | designate : Check designate containers ---------------------------------- 6.25s
2026-01-05 01:07:28.923795 | orchestrator | designate : Restart designate-central container ------------------------- 5.91s
2026-01-05 01:07:28.923799 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.66s
2026-01-05 01:07:28.923803 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.29s
2026-01-05 01:07:28.923808 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.24s
2026-01-05 01:07:28.923812 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.08s
2026-01-05 01:07:28.923817 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.88s
2026-01-05 01:07:28.923822 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.86s
2026-01-05 01:07:28.923826 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.78s
2026-01-05 01:07:28.923845 | orchestrator | 2026-01-05 01:07:28 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED
2026-01-05 01:07:28.923900 | orchestrator | 2026-01-05 01:07:28 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED
2026-01-05 01:07:28.924590 | orchestrator | 2026-01-05 01:07:28 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED
2026-01-05 01:07:28.927021 | orchestrator | 2026-01-05 01:07:28 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:07:28.927067 | orchestrator | 2026-01-05 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:07:31.962890 | orchestrator | 2026-01-05 01:07:31 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED
2026-01-05 01:07:31.964993 | orchestrator | 2026-01-05 01:07:31 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED
2026-01-05 01:07:31.965845 | orchestrator | 2026-01-05 01:07:31 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED
2026-01-05 01:07:31.967298 | orchestrator | 2026-01-05 01:07:31 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:07:31.967357 | orchestrator | 2026-01-05 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:07:35.020502 | orchestrator | 2026-01-05 01:07:35 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in
state STARTED 2026-01-05 01:07:35.021372 | orchestrator | 2026-01-05 01:07:35 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:35.023162 | orchestrator | 2026-01-05 01:07:35 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:35.024122 | orchestrator | 2026-01-05 01:07:35 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:35.024165 | orchestrator | 2026-01-05 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:38.100132 | orchestrator | 2026-01-05 01:07:38 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:38.100597 | orchestrator | 2026-01-05 01:07:38 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:38.101517 | orchestrator | 2026-01-05 01:07:38 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:38.102446 | orchestrator | 2026-01-05 01:07:38 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:38.102482 | orchestrator | 2026-01-05 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:41.135363 | orchestrator | 2026-01-05 01:07:41 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:41.135557 | orchestrator | 2026-01-05 01:07:41 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:41.136321 | orchestrator | 2026-01-05 01:07:41 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:41.136857 | orchestrator | 2026-01-05 01:07:41 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:41.136938 | orchestrator | 2026-01-05 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:44.173664 | orchestrator | 2026-01-05 01:07:44 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 
01:07:44.175707 | orchestrator | 2026-01-05 01:07:44 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:44.178682 | orchestrator | 2026-01-05 01:07:44 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:44.181820 | orchestrator | 2026-01-05 01:07:44 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:44.181948 | orchestrator | 2026-01-05 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:47.228555 | orchestrator | 2026-01-05 01:07:47 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:47.231299 | orchestrator | 2026-01-05 01:07:47 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:47.233580 | orchestrator | 2026-01-05 01:07:47 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:47.237674 | orchestrator | 2026-01-05 01:07:47 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:47.238117 | orchestrator | 2026-01-05 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:50.274472 | orchestrator | 2026-01-05 01:07:50 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:50.278465 | orchestrator | 2026-01-05 01:07:50 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:50.278665 | orchestrator | 2026-01-05 01:07:50 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:50.280643 | orchestrator | 2026-01-05 01:07:50 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:50.280676 | orchestrator | 2026-01-05 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:53.328778 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:53.331388 | orchestrator 
| 2026-01-05 01:07:53 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:53.334224 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:53.337086 | orchestrator | 2026-01-05 01:07:53 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:53.337593 | orchestrator | 2026-01-05 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:56.396234 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:56.398424 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:56.404184 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:56.406253 | orchestrator | 2026-01-05 01:07:56 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:56.406352 | orchestrator | 2026-01-05 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:59.467286 | orchestrator | 2026-01-05 01:07:59 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:07:59.469222 | orchestrator | 2026-01-05 01:07:59 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:07:59.469253 | orchestrator | 2026-01-05 01:07:59 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:07:59.470996 | orchestrator | 2026-01-05 01:07:59 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:07:59.471071 | orchestrator | 2026-01-05 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:02.607052 | orchestrator | 2026-01-05 01:08:02 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:02.607409 | orchestrator | 2026-01-05 01:08:02 | INFO  | 
Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:02.609808 | orchestrator | 2026-01-05 01:08:02 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:02.610353 | orchestrator | 2026-01-05 01:08:02 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:02.610535 | orchestrator | 2026-01-05 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:05.659233 | orchestrator | 2026-01-05 01:08:05 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:05.659734 | orchestrator | 2026-01-05 01:08:05 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:05.660837 | orchestrator | 2026-01-05 01:08:05 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:05.661796 | orchestrator | 2026-01-05 01:08:05 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:05.661834 | orchestrator | 2026-01-05 01:08:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:08.703884 | orchestrator | 2026-01-05 01:08:08 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:08.706717 | orchestrator | 2026-01-05 01:08:08 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:08.710220 | orchestrator | 2026-01-05 01:08:08 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:08.712627 | orchestrator | 2026-01-05 01:08:08 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:08.713762 | orchestrator | 2026-01-05 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:11.764644 | orchestrator | 2026-01-05 01:08:11 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:11.767036 | orchestrator | 2026-01-05 01:08:11 | INFO  | Task 
9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:11.770625 | orchestrator | 2026-01-05 01:08:11 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:11.773717 | orchestrator | 2026-01-05 01:08:11 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:11.774292 | orchestrator | 2026-01-05 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:14.811621 | orchestrator | 2026-01-05 01:08:14 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:14.811730 | orchestrator | 2026-01-05 01:08:14 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:14.815624 | orchestrator | 2026-01-05 01:08:14 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:14.816320 | orchestrator | 2026-01-05 01:08:14 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:14.816353 | orchestrator | 2026-01-05 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:17.841915 | orchestrator | 2026-01-05 01:08:17 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:17.842491 | orchestrator | 2026-01-05 01:08:17 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:17.843419 | orchestrator | 2026-01-05 01:08:17 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:17.845818 | orchestrator | 2026-01-05 01:08:17 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:17.845854 | orchestrator | 2026-01-05 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:20.894379 | orchestrator | 2026-01-05 01:08:20 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:20.896152 | orchestrator | 2026-01-05 01:08:20 | INFO  | Task 
9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:20.897929 | orchestrator | 2026-01-05 01:08:20 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:20.899568 | orchestrator | 2026-01-05 01:08:20 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:20.899881 | orchestrator | 2026-01-05 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:23.944727 | orchestrator | 2026-01-05 01:08:23 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:23.946993 | orchestrator | 2026-01-05 01:08:23 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:23.949285 | orchestrator | 2026-01-05 01:08:23 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:23.951206 | orchestrator | 2026-01-05 01:08:23 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:23.951646 | orchestrator | 2026-01-05 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:26.999284 | orchestrator | 2026-01-05 01:08:26 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:27.003547 | orchestrator | 2026-01-05 01:08:27 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:27.022988 | orchestrator | 2026-01-05 01:08:27 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:27.023953 | orchestrator | 2026-01-05 01:08:27 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:27.023983 | orchestrator | 2026-01-05 01:08:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:30.066611 | orchestrator | 2026-01-05 01:08:30 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:30.069403 | orchestrator | 2026-01-05 01:08:30 | INFO  | Task 
9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:30.072021 | orchestrator | 2026-01-05 01:08:30 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:30.073833 | orchestrator | 2026-01-05 01:08:30 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:30.073900 | orchestrator | 2026-01-05 01:08:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:33.112432 | orchestrator | 2026-01-05 01:08:33 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:33.113447 | orchestrator | 2026-01-05 01:08:33 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:33.114671 | orchestrator | 2026-01-05 01:08:33 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:33.115949 | orchestrator | 2026-01-05 01:08:33 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:33.116388 | orchestrator | 2026-01-05 01:08:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:36.163051 | orchestrator | 2026-01-05 01:08:36 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:36.164975 | orchestrator | 2026-01-05 01:08:36 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:36.167286 | orchestrator | 2026-01-05 01:08:36 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:36.169181 | orchestrator | 2026-01-05 01:08:36 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:36.169242 | orchestrator | 2026-01-05 01:08:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:39.218676 | orchestrator | 2026-01-05 01:08:39 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:39.221575 | orchestrator | 2026-01-05 01:08:39 | INFO  | Task 
9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:39.224388 | orchestrator | 2026-01-05 01:08:39 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:39.226111 | orchestrator | 2026-01-05 01:08:39 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:39.226386 | orchestrator | 2026-01-05 01:08:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:42.281345 | orchestrator | 2026-01-05 01:08:42 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:42.284585 | orchestrator | 2026-01-05 01:08:42 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:42.287262 | orchestrator | 2026-01-05 01:08:42 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state STARTED 2026-01-05 01:08:42.289742 | orchestrator | 2026-01-05 01:08:42 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:42.289800 | orchestrator | 2026-01-05 01:08:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:45.330266 | orchestrator | 2026-01-05 01:08:45 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:08:45.331904 | orchestrator | 2026-01-05 01:08:45 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:45.331968 | orchestrator | 2026-01-05 01:08:45 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:45.333300 | orchestrator | 2026-01-05 01:08:45 | INFO  | Task 26e24275-7f99-4041-9ac5-a63be4fdabb5 is in state SUCCESS 2026-01-05 01:08:45.334727 | orchestrator | 2026-01-05 01:08:45.334777 | orchestrator | 2026-01-05 01:08:45.334787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:08:45.334796 | orchestrator | 2026-01-05 01:08:45.334802 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-05 01:08:45.334810 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-01-05 01:08:45.334817 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:08:45.334825 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:08:45.334831 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:08:45.334835 | orchestrator | 2026-01-05 01:08:45.334839 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:08:45.334843 | orchestrator | Monday 05 January 2026 01:07:33 +0000 (0:00:00.824) 0:00:01.118 ******** 2026-01-05 01:08:45.334849 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-05 01:08:45.334856 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-05 01:08:45.334862 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-05 01:08:45.334869 | orchestrator | 2026-01-05 01:08:45.334875 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-05 01:08:45.334881 | orchestrator | 2026-01-05 01:08:45.334902 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:08:45.334909 | orchestrator | Monday 05 January 2026 01:07:34 +0000 (0:00:01.212) 0:00:02.331 ******** 2026-01-05 01:08:45.334915 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:08:45.334923 | orchestrator | 2026-01-05 01:08:45.334929 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-05 01:08:45.334936 | orchestrator | Monday 05 January 2026 01:07:35 +0000 (0:00:00.931) 0:00:03.262 ******** 2026-01-05 01:08:45.334942 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-05 01:08:45.334948 | orchestrator | 2026-01-05 01:08:45.334954 | 
orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-05 01:08:45.334960 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:03.753) 0:00:07.016 ******** 2026-01-05 01:08:45.334968 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-05 01:08:45.334974 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-05 01:08:45.334981 | orchestrator | 2026-01-05 01:08:45.334987 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-05 01:08:45.334993 | orchestrator | Monday 05 January 2026 01:07:45 +0000 (0:00:06.504) 0:00:13.521 ******** 2026-01-05 01:08:45.334999 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 01:08:45.335006 | orchestrator | 2026-01-05 01:08:45.335012 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-05 01:08:45.335019 | orchestrator | Monday 05 January 2026 01:07:49 +0000 (0:00:03.319) 0:00:16.840 ******** 2026-01-05 01:08:45.335025 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:08:45.335031 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-05 01:08:45.335037 | orchestrator | 2026-01-05 01:08:45.335043 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-05 01:08:45.335050 | orchestrator | Monday 05 January 2026 01:07:53 +0000 (0:00:03.977) 0:00:20.818 ******** 2026-01-05 01:08:45.335056 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:08:45.335063 | orchestrator | 2026-01-05 01:08:45.335069 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-05 01:08:45.335136 | orchestrator | Monday 05 January 2026 01:07:56 +0000 (0:00:03.449) 0:00:24.268 
******** 2026-01-05 01:08:45.335147 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-05 01:08:45.335153 | orchestrator | 2026-01-05 01:08:45.335175 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:08:45.335213 | orchestrator | Monday 05 January 2026 01:08:00 +0000 (0:00:03.917) 0:00:28.185 ******** 2026-01-05 01:08:45.335221 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335228 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:08:45.335234 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:08:45.335241 | orchestrator | 2026-01-05 01:08:45.335248 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-05 01:08:45.335255 | orchestrator | Monday 05 January 2026 01:08:01 +0000 (0:00:00.721) 0:00:28.907 ******** 2026-01-05 01:08:45.335266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335330 | orchestrator | 2026-01-05 01:08:45.335337 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-05 01:08:45.335344 | orchestrator | Monday 05 January 
2026 01:08:02 +0000 (0:00:01.525) 0:00:30.433 ******** 2026-01-05 01:08:45.335350 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335356 | orchestrator | 2026-01-05 01:08:45.335362 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-05 01:08:45.335369 | orchestrator | Monday 05 January 2026 01:08:03 +0000 (0:00:00.329) 0:00:30.763 ******** 2026-01-05 01:08:45.335375 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335387 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:08:45.335394 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:08:45.335400 | orchestrator | 2026-01-05 01:08:45.335407 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:08:45.335414 | orchestrator | Monday 05 January 2026 01:08:03 +0000 (0:00:00.606) 0:00:31.370 ******** 2026-01-05 01:08:45.335421 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:08:45.335428 | orchestrator | 2026-01-05 01:08:45.335434 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-05 01:08:45.335441 | orchestrator | Monday 05 January 2026 01:08:04 +0000 (0:00:00.587) 0:00:31.957 ******** 2026-01-05 01:08:45.335448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335482 | orchestrator | 2026-01-05 01:08:45.335488 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-05 01:08:45.335495 | orchestrator | Monday 05 January 2026 01:08:06 +0000 (0:00:01.781) 0:00:33.738 ******** 2026-01-05 01:08:45.335502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335513 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335541 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:08:45.335548 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:08:45.335554 | orchestrator | 2026-01-05 01:08:45.335560 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-05 01:08:45.335567 | orchestrator | Monday 05 January 2026 01:08:07 +0000 (0:00:01.225) 0:00:34.964 ******** 2026-01-05 01:08:45.335574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335581 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335642 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:08:45.335649 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:08:45.335655 | orchestrator | 2026-01-05 01:08:45.335662 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-05 01:08:45.335668 | orchestrator | Monday 05 January 2026 01:08:08 +0000 (0:00:00.813) 0:00:35.777 ******** 2026-01-05 01:08:45.335682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335713 | orchestrator | 2026-01-05 01:08:45.335719 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-05 01:08:45.335725 | orchestrator | Monday 05 January 2026 01:08:09 +0000 (0:00:01.297) 0:00:37.075 ******** 2026-01-05 01:08:45.335732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335766 | orchestrator | 2026-01-05 01:08:45.335772 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-05 01:08:45.335778 | orchestrator | Monday 05 January 2026 01:08:12 +0000 (0:00:03.443) 0:00:40.519 ******** 2026-01-05 01:08:45.335785 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:08:45.335791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:08:45.335798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:08:45.335804 | orchestrator | 2026-01-05 
01:08:45.335810 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-05 01:08:45.335816 | orchestrator | Monday 05 January 2026 01:08:15 +0000 (0:00:02.321) 0:00:42.841 ******** 2026-01-05 01:08:45.335822 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:45.335828 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:45.335834 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:45.335841 | orchestrator | 2026-01-05 01:08:45.335847 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-05 01:08:45.335853 | orchestrator | Monday 05 January 2026 01:08:16 +0000 (0:00:01.508) 0:00:44.349 ******** 2026-01-05 01:08:45.335860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335866 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:08:45.335873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335880 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:08:45.335892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:08:45.335903 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:08:45.335909 | orchestrator | 2026-01-05 01:08:45.335916 | orchestrator | TASK [placement : Check placement containers] 
********************************** 2026-01-05 01:08:45.335922 | orchestrator | Monday 05 January 2026 01:08:17 +0000 (0:00:00.710) 0:00:45.060 ******** 2026-01-05 01:08:45.335936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:08:45.335957 | orchestrator | 2026-01-05 01:08:45.335963 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-05 01:08:45.335970 | orchestrator | Monday 05 January 2026 01:08:18 +0000 (0:00:01.486) 0:00:46.546 ******** 2026-01-05 01:08:45.335976 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:45.335982 | orchestrator | 2026-01-05 01:08:45.335989 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-05 01:08:45.335995 | orchestrator | Monday 05 January 2026 01:08:21 +0000 (0:00:02.546) 0:00:49.092 ******** 2026-01-05 01:08:45.336001 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:45.336007 | orchestrator | 2026-01-05 01:08:45.336019 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-05 01:08:45.336025 | orchestrator | Monday 05 January 2026 01:08:23 +0000 (0:00:02.208) 0:00:51.301 
******** 2026-01-05 01:08:45.336035 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:45.336042 | orchestrator | 2026-01-05 01:08:45.336048 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-05 01:08:45.336054 | orchestrator | Monday 05 January 2026 01:08:37 +0000 (0:00:14.205) 0:01:05.506 ******** 2026-01-05 01:08:45.336061 | orchestrator | 2026-01-05 01:08:45.336067 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-05 01:08:45.336073 | orchestrator | Monday 05 January 2026 01:08:37 +0000 (0:00:00.079) 0:01:05.585 ******** 2026-01-05 01:08:45.336095 | orchestrator | 2026-01-05 01:08:45.336101 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-05 01:08:45.336107 | orchestrator | Monday 05 January 2026 01:08:37 +0000 (0:00:00.071) 0:01:05.657 ******** 2026-01-05 01:08:45.336113 | orchestrator | 2026-01-05 01:08:45.336120 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-05 01:08:45.336126 | orchestrator | Monday 05 January 2026 01:08:38 +0000 (0:00:00.070) 0:01:05.728 ******** 2026-01-05 01:08:45.336132 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:45.336138 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:45.336144 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:45.336149 | orchestrator | 2026-01-05 01:08:45.336160 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:08:45.336167 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 01:08:45.336174 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:08:45.336180 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-01-05 01:08:45.336185 | orchestrator | 2026-01-05 01:08:45.336191 | orchestrator | 2026-01-05 01:08:45.336197 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:08:45.336204 | orchestrator | Monday 05 January 2026 01:08:43 +0000 (0:00:05.410) 0:01:11.138 ******** 2026-01-05 01:08:45.336213 | orchestrator | =============================================================================== 2026-01-05 01:08:45.336220 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.21s 2026-01-05 01:08:45.336225 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.50s 2026-01-05 01:08:45.336230 | orchestrator | placement : Restart placement-api container ----------------------------- 5.41s 2026-01-05 01:08:45.336236 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.98s 2026-01-05 01:08:45.336242 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.92s 2026-01-05 01:08:45.336248 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.75s 2026-01-05 01:08:45.336254 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.45s 2026-01-05 01:08:45.336259 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.44s 2026-01-05 01:08:45.336265 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.32s 2026-01-05 01:08:45.336271 | orchestrator | placement : Creating placement databases -------------------------------- 2.55s 2026-01-05 01:08:45.336276 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.32s 2026-01-05 01:08:45.336283 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.21s 2026-01-05 01:08:45.336289 | orchestrator | 
service-cert-copy : placement | Copying over extra CA certificates ------ 1.78s 2026-01-05 01:08:45.336295 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.53s 2026-01-05 01:08:45.336308 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2026-01-05 01:08:45.336314 | orchestrator | placement : Check placement containers ---------------------------------- 1.49s 2026-01-05 01:08:45.336320 | orchestrator | placement : Copying over config.json files for services ----------------- 1.30s 2026-01-05 01:08:45.336325 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.23s 2026-01-05 01:08:45.336331 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2026-01-05 01:08:45.336337 | orchestrator | placement : include_tasks ----------------------------------------------- 0.93s 2026-01-05 01:08:45.336445 | orchestrator | 2026-01-05 01:08:45 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:45.336455 | orchestrator | 2026-01-05 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:48.390864 | orchestrator | 2026-01-05 01:08:48 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:08:48.394510 | orchestrator | 2026-01-05 01:08:48 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:48.395649 | orchestrator | 2026-01-05 01:08:48 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:48.397617 | orchestrator | 2026-01-05 01:08:48 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:48.397685 | orchestrator | 2026-01-05 01:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:51.461971 | orchestrator | 2026-01-05 01:08:51 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 
2026-01-05 01:08:51.463541 | orchestrator | 2026-01-05 01:08:51 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:51.465974 | orchestrator | 2026-01-05 01:08:51 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:51.467841 | orchestrator | 2026-01-05 01:08:51 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:51.467895 | orchestrator | 2026-01-05 01:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:54.519509 | orchestrator | 2026-01-05 01:08:54 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:08:54.522113 | orchestrator | 2026-01-05 01:08:54 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:54.524180 | orchestrator | 2026-01-05 01:08:54 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:54.526287 | orchestrator | 2026-01-05 01:08:54 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:54.526395 | orchestrator | 2026-01-05 01:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:08:57.576028 | orchestrator | 2026-01-05 01:08:57 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:08:57.579312 | orchestrator | 2026-01-05 01:08:57 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:08:57.581783 | orchestrator | 2026-01-05 01:08:57 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:08:57.583756 | orchestrator | 2026-01-05 01:08:57 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:08:57.583943 | orchestrator | 2026-01-05 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:00.625797 | orchestrator | 2026-01-05 01:09:00 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:09:00.628223 | 
orchestrator | 2026-01-05 01:09:00 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:09:00.631311 | orchestrator | 2026-01-05 01:09:00 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:09:00.633996 | orchestrator | 2026-01-05 01:09:00 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:09:00.634120 | orchestrator | 2026-01-05 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:03.682579 | orchestrator | 2026-01-05 01:09:03 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:09:03.684826 | orchestrator | 2026-01-05 01:09:03 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:09:03.686686 | orchestrator | 2026-01-05 01:09:03 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state STARTED 2026-01-05 01:09:03.689369 | orchestrator | 2026-01-05 01:09:03 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:09:03.689412 | orchestrator | 2026-01-05 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:06.727145 | orchestrator | 2026-01-05 01:09:06 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:09:06.727203 | orchestrator | 2026-01-05 01:09:06 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:09:06.730395 | orchestrator | 2026-01-05 01:09:06 | INFO  | Task 9e922e82-ae46-43aa-b8bc-c8dd7665a5d2 is in state SUCCESS 2026-01-05 01:09:06.731771 | orchestrator | 2026-01-05 01:09:06.731815 | orchestrator | 2026-01-05 01:09:06.731827 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:09:06.731839 | orchestrator | 2026-01-05 01:09:06.731850 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:09:06.731858 | orchestrator | Monday 05 January 
2026 01:03:54 +0000 (0:00:00.265) 0:00:00.265 ******** 2026-01-05 01:09:06.731864 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:09:06.731870 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:09:06.731877 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:09:06.731882 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:09:06.731889 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:09:06.731895 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:09:06.731901 | orchestrator | 2026-01-05 01:09:06.731907 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:09:06.731914 | orchestrator | Monday 05 January 2026 01:03:54 +0000 (0:00:00.748) 0:00:01.014 ******** 2026-01-05 01:09:06.731921 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-05 01:09:06.731928 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-05 01:09:06.731935 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-05 01:09:06.731942 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-05 01:09:06.731949 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-05 01:09:06.731957 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-05 01:09:06.731963 | orchestrator | 2026-01-05 01:09:06.731970 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-05 01:09:06.731977 | orchestrator | 2026-01-05 01:09:06.731984 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-05 01:09:06.731991 | orchestrator | Monday 05 January 2026 01:03:55 +0000 (0:00:00.817) 0:00:01.832 ******** 2026-01-05 01:09:06.731998 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:09:06.732005 | orchestrator | 
2026-01-05 01:09:06.732012 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-05 01:09:06.732019 | orchestrator | Monday 05 January 2026 01:03:56 +0000 (0:00:01.250) 0:00:03.083 ********
2026-01-05 01:09:06.732081 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:06.732108 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:06.732115 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:06.732123 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:09:06.732130 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:09:06.732137 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:09:06.732190 | orchestrator |
2026-01-05 01:09:06.732207 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-05 01:09:06.732215 | orchestrator | Monday 05 January 2026 01:03:58 +0000 (0:00:01.513) 0:00:04.596 ********
2026-01-05 01:09:06.732222 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:06.732230 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:06.732238 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:06.732245 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:09:06.732253 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:09:06.732260 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:09:06.732267 | orchestrator |
2026-01-05 01:09:06.732344 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-05 01:09:06.732353 | orchestrator | Monday 05 January 2026 01:04:00 +0000 (0:00:01.826) 0:00:06.423 ********
2026-01-05 01:09:06.732360 | orchestrator | ok: [testbed-node-0] => {
2026-01-05 01:09:06.732367 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732374 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732382 | orchestrator | }
2026-01-05 01:09:06.732389 | orchestrator | ok: [testbed-node-1] => {
2026-01-05 01:09:06.732396 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732403 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732410 | orchestrator | }
2026-01-05 01:09:06.732417 | orchestrator | ok: [testbed-node-2] => {
2026-01-05 01:09:06.732424 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732431 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732438 | orchestrator | }
2026-01-05 01:09:06.732445 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 01:09:06.732452 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732459 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732466 | orchestrator | }
2026-01-05 01:09:06.732473 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:09:06.732480 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732487 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732494 | orchestrator | }
2026-01-05 01:09:06.732501 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:09:06.732508 | orchestrator |  "changed": false,
2026-01-05 01:09:06.732515 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:09:06.732522 | orchestrator | }
2026-01-05 01:09:06.732530 | orchestrator |
2026-01-05 01:09:06.732537 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-05 01:09:06.732544 | orchestrator | Monday 05 January 2026 01:04:01 +0000 (0:00:01.390) 0:00:07.814 ********
2026-01-05 01:09:06.732551 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.732558 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.732565 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.732572 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.732579 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.732586 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.732593 | orchestrator |
2026-01-05 01:09:06.732600 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-05 01:09:06.732607 | orchestrator | Monday 05 January 2026 01:04:02 +0000 (0:00:00.651) 0:00:08.466 ********
2026-01-05 01:09:06.732614 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-01-05 01:09:06.732622 | orchestrator |
2026-01-05 01:09:06.732629 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-05 01:09:06.732636 | orchestrator | Monday 05 January 2026 01:04:05 +0000 (0:00:03.505) 0:00:11.971 ********
2026-01-05 01:09:06.732643 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-05 01:09:06.732651 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-01-05 01:09:06.732672 | orchestrator |
2026-01-05 01:09:06.732691 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-05 01:09:06.732716 | orchestrator | Monday 05 January 2026 01:04:12 +0000 (0:00:06.434) 0:00:18.405 ********
2026-01-05 01:09:06.732725 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:09:06.732730 | orchestrator |
2026-01-05 01:09:06.732735 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-05 01:09:06.732739 | orchestrator | Monday 05 January 2026 01:04:15 +0000 (0:00:03.386) 0:00:21.792 ********
2026-01-05 01:09:06.732743 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:09:06.732747 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-01-05 01:09:06.732751 | orchestrator |
2026-01-05 01:09:06.732755 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-05 01:09:06.732759 | orchestrator | Monday 05 January 2026 01:04:19 +0000 (0:00:03.966) 0:00:25.758 ********
2026-01-05 01:09:06.732764 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:09:06.732768 | orchestrator |
2026-01-05 01:09:06.732772 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-05 01:09:06.732776 | orchestrator | Monday 05 January 2026 01:04:23 +0000 (0:00:03.574) 0:00:29.332 ********
2026-01-05 01:09:06.732780 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-05 01:09:06.732784 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-01-05 01:09:06.732788 | orchestrator |
2026-01-05 01:09:06.732792 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:09:06.732796 | orchestrator | Monday 05 January 2026 01:04:30 +0000 (0:00:07.431) 0:00:36.764 ********
2026-01-05 01:09:06.732800 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.732804 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.732809 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.732813 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.732817 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.732821 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.732825 | orchestrator |
2026-01-05 01:09:06.732829 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-05 01:09:06.732833 | orchestrator | Monday 05 January 2026 01:04:31 +0000 (0:00:00.843) 0:00:37.608 ********
2026-01-05 01:09:06.732837 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.732841 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.732845 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.732849 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.732853 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.732858 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.732862 | orchestrator |
2026-01-05 01:09:06.732869 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-05 01:09:06.732874 | orchestrator | Monday 05 January 2026 01:04:33 +0000 (0:00:02.317) 0:00:39.925 ********
2026-01-05 01:09:06.732878 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:06.732882 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:06.732897 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:06.732902 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:09:06.732906 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:09:06.732910 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:09:06.732914 | orchestrator |
2026-01-05 01:09:06.732918 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-05 01:09:06.732922 | orchestrator | Monday 05 January 2026 01:04:34 +0000 (0:00:01.221) 0:00:41.147 ********
2026-01-05 01:09:06.732926 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.732931 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.732935 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.732939 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.732943 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.732951 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.732955 | orchestrator |
2026-01-05 01:09:06.732959 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-01-05 01:09:06.732963 | orchestrator | Monday 05 January 2026 01:04:38 +0000 (0:00:03.149) 0:00:44.297 ********
2026-01-05 01:09:06.732970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.732981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.732986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.732993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.732998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733009 | orchestrator |
2026-01-05 01:09:06.733014 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-01-05 01:09:06.733021 | orchestrator | Monday 05 January 2026 01:04:41 +0000 (0:00:03.782) 0:00:48.079 ********
2026-01-05 01:09:06.733029 | orchestrator | [WARNING]: Skipped
2026-01-05 01:09:06.733037 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-01-05 01:09:06.733045 | orchestrator | due to this access issue:
2026-01-05 01:09:06.733065 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-01-05 01:09:06.733072 | orchestrator | a directory
2026-01-05 01:09:06.733080 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:09:06.733086 | orchestrator |
2026-01-05 01:09:06.733097 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:09:06.733104 | orchestrator | Monday 05 January 2026 01:04:42 +0000 (0:00:01.055) 0:00:49.135 ********
2026-01-05 01:09:06.733111 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:09:06.733117 | orchestrator |
2026-01-05 01:09:06.733124 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-05 01:09:06.733131 | orchestrator | Monday 05 January 2026 01:04:44 +0000 (0:00:01.319) 0:00:50.454 ********
2026-01-05 01:09:06.733139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733199 | orchestrator |
2026-01-05 01:09:06.733206 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-01-05 01:09:06.733217 | orchestrator | Monday 05 January 2026 01:04:48 +0000 (0:00:03.875) 0:00:54.330 ********
2026-01-05 01:09:06.733228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733243 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733249 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.733260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733267 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.733274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733281 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.733291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733303 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.733310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733316 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.733323 | orchestrator |
2026-01-05 01:09:06.733330 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-01-05 01:09:06.733336 | orchestrator | Monday 05 January 2026 01:04:53 +0000 (0:00:04.991) 0:00:59.322 ********
2026-01-05 01:09:06.733344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733351 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733370 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.733377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733387 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.733397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733404 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.733412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733419 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.733426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733433 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.733440 | orchestrator |
2026-01-05 01:09:06.733447 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-01-05 01:09:06.733457 | orchestrator | Monday 05 January 2026 01:04:56 +0000 (0:00:02.925) 0:01:03.149 ********
2026-01-05 01:09:06.733463 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733470 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.733476 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.733483 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.733491 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.733498 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.733503 | orchestrator |
2026-01-05 01:09:06.733509 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-01-05 01:09:06.733519 | orchestrator | Monday 05 January 2026 01:04:59 +0000 (0:00:00.357) 0:01:06.074 ********
2026-01-05 01:09:06.733525 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733530 | orchestrator |
2026-01-05 01:09:06.733536 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-01-05 01:09:06.733543 | orchestrator | Monday 05 January 2026 01:05:00 +0000 (0:00:00.357) 0:01:06.431 ********
2026-01-05 01:09:06.733548 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733553 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.733558 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.733564 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.733569 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.733575 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.733580 | orchestrator |
2026-01-05 01:09:06.733585 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-01-05 01:09:06.733591 | orchestrator | Monday 05 January 2026 01:05:02 +0000 (0:00:02.234) 0:01:08.666 ********
2026-01-05 01:09:06.733599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733605 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.733611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733617 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.733622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.733628 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.733890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.733911 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.733918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:09:06.733925 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.733936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.733942 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.733948 | orchestrator | 2026-01-05 01:09:06.733955 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-05 01:09:06.733961 | orchestrator | Monday 05 January 2026 01:05:07 +0000 (0:00:04.737) 0:01:13.404 ******** 2026-01-05 01:09:06.733967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.733979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.733992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.733999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.734009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.734091 | orchestrator | 2026-01-05 01:09:06.734097 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-05 01:09:06.734108 | orchestrator | Monday 05 January 2026 01:05:13 +0000 (0:00:06.155) 0:01:19.559 ******** 2026-01-05 01:09:06.734119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.734149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.734164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:09:06.734170 | orchestrator | 2026-01-05 01:09:06.734176 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-05 01:09:06.734182 | orchestrator | Monday 05 January 2026 01:05:21 +0000 (0:00:07.942) 0:01:27.502 ******** 2026-01-05 01:09:06.734188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:09:06.734195 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:06.734203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:09:06.734210 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:06.734216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734226 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734239 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:09:06.734256 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.734263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734269 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734275 | orchestrator | 2026-01-05 01:09:06.734281 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-05 01:09:06.734287 | orchestrator | Monday 05 January 2026 01:05:25 +0000 (0:00:03.795) 0:01:31.298 ******** 2026-01-05 01:09:06.734293 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734299 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734308 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734314 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:06.734320 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:06.734326 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:06.734333 | orchestrator | 2026-01-05 01:09:06.734339 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-05 01:09:06.734345 | orchestrator | Monday 05 January 2026 01:05:28 +0000 (0:00:03.515) 0:01:34.814 ******** 2026-01-05 01:09:06.734352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734362 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734375 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:09:06.734392 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 01:09:06.734398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734418 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:09:06.734424 | orchestrator | 2026-01-05 01:09:06.734430 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-05 01:09:06.734436 | orchestrator | Monday 05 January 2026 01:05:33 +0000 (0:00:05.222) 0:01:40.036 ******** 2026-01-05 01:09:06.734442 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:06.734448 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:06.734454 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.734460 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734466 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734472 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734478 | orchestrator | 2026-01-05 01:09:06.734485 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-05 01:09:06.734491 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:02.664) 0:01:42.701 ******** 2026-01-05 01:09:06.734496 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:06.734502 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 01:09:06.734509 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.734515 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734522 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734528 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734535 | orchestrator | 2026-01-05 01:09:06.734542 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-05 01:09:06.734548 | orchestrator | Monday 05 January 2026 01:05:40 +0000 (0:00:04.369) 0:01:47.071 ******** 2026-01-05 01:09:06.734558 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:06.734565 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.734571 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734578 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:06.734584 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734591 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734598 | orchestrator | 2026-01-05 01:09:06.734605 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-05 01:09:06.734611 | orchestrator | Monday 05 January 2026 01:05:43 +0000 (0:00:02.915) 0:01:49.986 ******** 2026-01-05 01:09:06.734618 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:06.734625 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:06.734636 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:06.734644 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:06.734651 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:06.734657 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:06.734663 | orchestrator | 2026-01-05 01:09:06.734670 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-05 01:09:06.734677 | orchestrator | Monday 05 January 2026 01:05:45 +0000 (0:00:02.118) 0:01:52.104 ******** 
2026-01-05 01:09:06.734685 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.734693 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.734701 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.734713 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.734720 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.734727 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.734733 | orchestrator |
2026-01-05 01:09:06.734739 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-01-05 01:09:06.734745 | orchestrator | Monday 05 January 2026 01:05:50 +0000 (0:00:04.391) 0:01:56.495 ********
2026-01-05 01:09:06.734751 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.734757 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.734763 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.734769 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.734775 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.734781 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.734787 | orchestrator |
2026-01-05 01:09:06.734793 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-01-05 01:09:06.734800 | orchestrator | Monday 05 January 2026 01:05:53 +0000 (0:00:03.540) 0:02:00.036 ********
2026-01-05 01:09:06.734806 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734812 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.734818 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734827 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.734833 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734839 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.734845 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734851 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.734857 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734863 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.734869 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-05 01:09:06.734875 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.734881 | orchestrator |
2026-01-05 01:09:06.734887 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-01-05 01:09:06.734893 | orchestrator | Monday 05 January 2026 01:05:55 +0000 (0:00:01.926) 0:02:01.962 ********
2026-01-05 01:09:06.734900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.734907 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.734918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.734928 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.734935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.734941 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.734949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.734956 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.734962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.734968 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.734974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.734980 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.734986 | orchestrator |
2026-01-05 01:09:06.734992 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-01-05 01:09:06.735003 | orchestrator | Monday 05 January 2026 01:05:57 +0000 (0:00:02.104) 0:02:04.067 ********
2026-01-05 01:09:06.735013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735020 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735033 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735077 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735091 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735113 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735127 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735133 | orchestrator |
2026-01-05 01:09:06.735141 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-05 01:09:06.735147 | orchestrator | Monday 05 January 2026 01:06:00 +0000 (0:00:02.552) 0:02:06.620 ********
2026-01-05 01:09:06.735154 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735161 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735168 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735175 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735181 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735188 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735194 | orchestrator |
2026-01-05 01:09:06.735200 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-05 01:09:06.735207 | orchestrator | Monday 05 January 2026 01:06:04 +0000 (0:00:03.757) 0:02:10.377 ********
2026-01-05 01:09:06.735213 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735220 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735226 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735233 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:09:06.735240 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:09:06.735246 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:09:06.735252 | orchestrator |
2026-01-05 01:09:06.735258 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-05 01:09:06.735269 | orchestrator | Monday 05 January 2026 01:06:08 +0000 (0:00:04.020) 0:02:14.398 ********
2026-01-05 01:09:06.735275 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735281 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735287 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735293 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735299 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735304 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735311 | orchestrator |
2026-01-05 01:09:06.735317 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-05 01:09:06.735322 | orchestrator | Monday 05 January 2026 01:06:10 +0000 (0:00:02.434) 0:02:16.833 ********
2026-01-05 01:09:06.735328 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735334 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735340 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735345 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735355 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735361 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735367 | orchestrator |
2026-01-05 01:09:06.735372 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-05 01:09:06.735378 | orchestrator | Monday 05 January 2026 01:06:13 +0000 (0:00:02.933) 0:02:19.767 ********
2026-01-05 01:09:06.735384 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735390 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735396 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735402 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735407 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735413 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735419 | orchestrator |
2026-01-05 01:09:06.735425 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-05 01:09:06.735431 | orchestrator | Monday 05 January 2026 01:06:17 +0000 (0:00:04.104) 0:02:23.871 ********
2026-01-05 01:09:06.735437 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735444 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735450 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735457 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735464 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735471 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735478 | orchestrator |
2026-01-05 01:09:06.735485 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-05 01:09:06.735491 | orchestrator | Monday 05 January 2026 01:06:22 +0000 (0:00:05.309) 0:02:29.181 ********
2026-01-05 01:09:06.735498 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735505 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735512 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735517 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735523 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735530 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735537 | orchestrator |
2026-01-05 01:09:06.735544 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-05 01:09:06.735551 | orchestrator | Monday 05 January 2026 01:06:27 +0000 (0:00:04.243) 0:02:33.425 ********
2026-01-05 01:09:06.735558 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735565 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735572 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735579 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735586 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735593 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735600 | orchestrator |
2026-01-05 01:09:06.735607 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-05 01:09:06.735620 | orchestrator | Monday 05 January 2026 01:06:32 +0000 (0:00:05.101) 0:02:38.526 ********
2026-01-05 01:09:06.735627 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735635 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735642 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735650 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735659 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735667 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735676 | orchestrator |
2026-01-05 01:09:06.735684 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-05 01:09:06.735693 | orchestrator | Monday 05 January 2026 01:06:35 +0000 (0:00:03.167) 0:02:41.694 ********
2026-01-05 01:09:06.735701 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735711 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735717 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735725 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735737 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735745 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735752 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735760 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735767 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735774 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735781 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:09:06.735788 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735802 | orchestrator |
2026-01-05 01:09:06.735808 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-05 01:09:06.735815 | orchestrator | Monday 05 January 2026 01:06:37 +0000 (0:00:02.462) 0:02:44.156 ********
2026-01-05 01:09:06.735826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735833 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.735840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735847 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.735859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735866 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.735873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735883 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.735894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735901 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.735908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735915 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.735921 | orchestrator |
2026-01-05 01:09:06.735928 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-01-05 01:09:06.735935 | orchestrator | Monday 05 January 2026 01:06:40 +0000 (0:00:02.234) 0:02:46.390 ********
2026-01-05 01:09:06.735942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:09:06.735975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:09:06.735996 | orchestrator |
2026-01-05 01:09:06.736002 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:09:06.736009 | orchestrator | Monday 05 January 2026 01:06:43 +0000 (0:00:02.886) 0:02:49.277 ********
2026-01-05 01:09:06.736019 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:06.736026 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:06.736032 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:06.736038 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:06.736044 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:06.736090 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:06.736097 | orchestrator |
2026-01-05 01:09:06.736103 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-05 01:09:06.736109 | orchestrator | Monday 05 January 2026 01:06:43 +0000 (0:00:00.807) 0:02:50.085 ********
2026-01-05 01:09:06.736115 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:09:06.736121 | orchestrator |
2026-01-05 01:09:06.736127 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-05 01:09:06.736133 | orchestrator | Monday 05 January 2026 01:06:46 +0000 (0:00:02.201) 0:02:52.286 ********
2026-01-05 01:09:06.736140 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:09:06.736146 | orchestrator |
2026-01-05 01:09:06.736152 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-05 01:09:06.736158 | orchestrator | Monday 05 January 2026 01:06:48 +0000 (0:00:02.043) 0:02:54.330 ********
2026-01-05 01:09:06.736164 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:09:06.736170 | orchestrator |
2026-01-05 01:09:06.736176 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736182 | orchestrator | Monday 05 January 2026 01:07:31 +0000 (0:00:43.659) 0:03:37.989 ********
2026-01-05 01:09:06.736188 | orchestrator |
2026-01-05 01:09:06.736194 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736200 | orchestrator | Monday 05 January 2026 01:07:31 +0000 (0:00:00.097) 0:03:38.086 ********
2026-01-05 01:09:06.736206 | orchestrator |
2026-01-05 01:09:06.736212 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736219 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.411) 0:03:38.498 ********
2026-01-05 01:09:06.736224 | orchestrator |
2026-01-05 01:09:06.736230 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736237 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.077) 0:03:38.576 ********
2026-01-05 01:09:06.736243 | orchestrator |
2026-01-05 01:09:06.736249 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736255 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.118) 0:03:38.694 ********
2026-01-05 01:09:06.736261 | orchestrator |
2026-01-05 01:09:06.736267 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:09:06.736273 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.111) 0:03:38.805 ********
2026-01-05 01:09:06.736279 | orchestrator |
2026-01-05 01:09:06.736285 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-05 01:09:06.736292 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.105) 0:03:38.910 ********
2026-01-05 01:09:06.736302 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:09:06.736308 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:09:06.736315 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:09:06.736323 | orchestrator |
2026-01-05 01:09:06.736329 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-05 01:09:06.736335 | orchestrator | Monday 05 January 2026 01:08:00 +0000 (0:00:27.432) 0:04:06.343 ********
2026-01-05 01:09:06.736341 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:09:06.736347 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:09:06.736354 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:09:06.736361 | orchestrator |
2026-01-05 01:09:06.736368 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:09:06.736375 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-05 01:09:06.736388 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0  failed=0  skipped=31  rescued=0  ignored=0
2026-01-05 01:09:06.736395 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0  failed=0  skipped=31  rescued=0  ignored=0
2026-01-05 01:09:06.736401 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-05 01:09:06.736408 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-05 01:09:06.736414 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-05 01:09:06.736420 | orchestrator |
2026-01-05 01:09:06.736426 | orchestrator |
2026-01-05 01:09:06.736432 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:09:06.736438 | orchestrator | Monday 05 January 2026 01:09:06 +0000 (0:01:05.925) 0:05:12.268 ********
2026-01-05 01:09:06.736444 | orchestrator |
=============================================================================== 2026-01-05 01:09:06.736450 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.93s 2026-01-05 01:09:06.736457 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.66s 2026-01-05 01:09:06.736462 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.43s 2026-01-05 01:09:06.736468 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.94s 2026-01-05 01:09:06.736474 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.43s 2026-01-05 01:09:06.736480 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.43s 2026-01-05 01:09:06.736486 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.16s 2026-01-05 01:09:06.736492 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 5.31s 2026-01-05 01:09:06.736502 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.22s 2026-01-05 01:09:06.736509 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.10s 2026-01-05 01:09:06.736515 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.99s 2026-01-05 01:09:06.736521 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.74s 2026-01-05 01:09:06.736527 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.39s 2026-01-05 01:09:06.736534 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 4.37s 2026-01-05 01:09:06.736540 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.24s 2026-01-05 01:09:06.736545 | orchestrator | neutron : 
Copying over bgp_dragent.ini ---------------------------------- 4.10s 2026-01-05 01:09:06.736551 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.02s 2026-01-05 01:09:06.736557 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.97s 2026-01-05 01:09:06.736563 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.88s 2026-01-05 01:09:06.736569 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.83s 2026-01-05 01:09:06.736575 | orchestrator | 2026-01-05 01:09:06 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:09:06.736581 | orchestrator | 2026-01-05 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:09.761803 | orchestrator | 2026-01-05 01:09:09 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:09:09.764664 | orchestrator | 2026-01-05 01:09:09 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:09:09.765762 | orchestrator | 2026-01-05 01:09:09 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:09:09.765789 | orchestrator | 2026-01-05 01:09:09 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:09:09.766171 | orchestrator | 2026-01-05 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:12.804407 | orchestrator | 2026-01-05 01:09:12 | INFO  | Task e04f0df4-f530-4c14-933f-d7c4d7671899 is in state STARTED 2026-01-05 01:09:12.805711 | orchestrator | 2026-01-05 01:09:12 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:09:12.806561 | orchestrator | 2026-01-05 01:09:12 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:09:12.808431 | orchestrator | 2026-01-05 01:09:12 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in 
state STARTED 2026-01-05 01:09:12.808485 | orchestrator | 2026-01-05 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:47.511862 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task
e04f0df4-f530-4c14-933f-d7c4d7671899 is in state SUCCESS 2026-01-05 01:10:47.513083 | orchestrator | 2026-01-05 01:10:47.513133 | orchestrator | 2026-01-05 01:10:47.513141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:10:47.513160 | orchestrator | 2026-01-05 01:10:47.513175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:10:47.513256 | orchestrator | Monday 05 January 2026 01:08:48 +0000 (0:00:00.318) 0:00:00.318 ******** 2026-01-05 01:10:47.513268 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:47.513278 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:47.513287 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:47.513295 | orchestrator | 2026-01-05 01:10:47.513305 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:10:47.513313 | orchestrator | Monday 05 January 2026 01:08:48 +0000 (0:00:00.315) 0:00:00.634 ******** 2026-01-05 01:10:47.513321 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-05 01:10:47.513330 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-05 01:10:47.513338 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-05 01:10:47.513345 | orchestrator | 2026-01-05 01:10:47.513353 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-05 01:10:47.513360 | orchestrator | 2026-01-05 01:10:47.513367 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-05 01:10:47.513375 | orchestrator | Monday 05 January 2026 01:08:49 +0000 (0:00:00.440) 0:00:01.074 ******** 2026-01-05 01:10:47.513383 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:10:47.513415 | orchestrator | 2026-01-05 01:10:47.513425 | 
orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-01-05 01:10:47.513433 | orchestrator | Monday 05 January 2026 01:08:49 +0000 (0:00:00.558) 0:00:01.633 ******** 2026-01-05 01:10:47.513442 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-01-05 01:10:47.513450 | orchestrator | 2026-01-05 01:10:47.513458 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-01-05 01:10:47.513466 | orchestrator | Monday 05 January 2026 01:08:53 +0000 (0:00:03.554) 0:00:05.187 ******** 2026-01-05 01:10:47.513474 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-01-05 01:10:47.513481 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-01-05 01:10:47.513487 | orchestrator | 2026-01-05 01:10:47.513492 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-01-05 01:10:47.513497 | orchestrator | Monday 05 January 2026 01:08:59 +0000 (0:00:06.532) 0:00:11.720 ******** 2026-01-05 01:10:47.513503 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 01:10:47.513510 | orchestrator | 2026-01-05 01:10:47.513515 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-01-05 01:10:47.513521 | orchestrator | Monday 05 January 2026 01:09:03 +0000 (0:00:03.428) 0:00:15.148 ******** 2026-01-05 01:10:47.513526 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:10:47.513532 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-01-05 01:10:47.513538 | orchestrator | 2026-01-05 01:10:47.513543 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-01-05 01:10:47.513549 | orchestrator | Monday 05 January 2026 01:09:07 +0000 
(0:00:03.883) 0:00:19.032 ******** 2026-01-05 01:10:47.513554 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:10:47.513559 | orchestrator | 2026-01-05 01:10:47.513565 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-01-05 01:10:47.513570 | orchestrator | Monday 05 January 2026 01:09:10 +0000 (0:00:03.466) 0:00:22.498 ******** 2026-01-05 01:10:47.513586 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-01-05 01:10:47.513591 | orchestrator | 2026-01-05 01:10:47.513597 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-01-05 01:10:47.513602 | orchestrator | Monday 05 January 2026 01:09:14 +0000 (0:00:03.542) 0:00:26.041 ******** 2026-01-05 01:10:47.513608 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:47.513613 | orchestrator | 2026-01-05 01:10:47.513618 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-01-05 01:10:47.513623 | orchestrator | Monday 05 January 2026 01:09:17 +0000 (0:00:03.285) 0:00:29.327 ******** 2026-01-05 01:10:47.513628 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:47.513634 | orchestrator | 2026-01-05 01:10:47.513641 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-01-05 01:10:47.513649 | orchestrator | Monday 05 January 2026 01:09:21 +0000 (0:00:03.801) 0:00:33.129 ******** 2026-01-05 01:10:47.513656 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:47.513663 | orchestrator | 2026-01-05 01:10:47.513671 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-05 01:10:47.513679 | orchestrator | Monday 05 January 2026 01:09:24 +0000 (0:00:03.418) 0:00:36.547 ******** 2026-01-05 01:10:47.513707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513778 | orchestrator | 2026-01-05 01:10:47.513782 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-05 01:10:47.513787 | orchestrator | Monday 05 January 2026 01:09:25 +0000 (0:00:01.368) 0:00:37.916 ******** 2026-01-05 01:10:47.513792 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:47.513796 | orchestrator | 2026-01-05 01:10:47.513801 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-01-05 01:10:47.513805 | orchestrator | Monday 05 January 2026 01:09:26 +0000 (0:00:00.131) 0:00:38.047 ******** 2026-01-05 01:10:47.513810 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:47.513814 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:47.513819 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 01:10:47.513823 | orchestrator | 2026-01-05 01:10:47.513828 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-01-05 01:10:47.513832 | orchestrator | Monday 05 January 2026 01:09:26 +0000 (0:00:00.512) 0:00:38.560 ******** 2026-01-05 01:10:47.513837 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:10:47.513841 | orchestrator | 2026-01-05 01:10:47.513846 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-01-05 01:10:47.513850 | orchestrator | Monday 05 January 2026 01:09:27 +0000 (0:00:00.981) 0:00:39.542 ******** 2026-01-05 01:10:47.513855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.513896 | orchestrator | 2026-01-05 
01:10:47.513901 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-05 01:10:47.513905 | orchestrator | Monday 05 January 2026 01:09:30 +0000 (0:00:02.409) 0:00:41.952 ******** 2026-01-05 01:10:47.513910 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:47.513914 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:47.513919 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:47.513923 | orchestrator | 2026-01-05 01:10:47.513951 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-05 01:10:47.513956 | orchestrator | Monday 05 January 2026 01:09:30 +0000 (0:00:00.330) 0:00:42.283 ******** 2026-01-05 01:10:47.513961 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:10:47.513965 | orchestrator | 2026-01-05 01:10:47.513973 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-05 01:10:47.513977 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:00.759) 0:00:43.042 ******** 2026-01-05 01:10:47.513982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.513997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2026-01-05 01:10:47.514007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514067 | orchestrator | 2026-01-05 01:10:47.514072 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-05 01:10:47.514076 | orchestrator | Monday 05 January 2026 01:09:33 +0000 (0:00:02.516) 0:00:45.558 ******** 2026-01-05 01:10:47.514085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514095 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:47.514102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514172 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:10:47.514180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514202 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:47.514209 | orchestrator | 2026-01-05 01:10:47.514217 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-05 01:10:47.514224 | orchestrator | Monday 05 January 2026 01:09:34 +0000 (0:00:00.654) 0:00:46.212 ******** 2026-01-05 01:10:47.514231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514249 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:47.514257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 
01:10:47.514270 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:47.514275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514285 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:10:47.514298 | orchestrator | 2026-01-05 01:10:47.514305 | orchestrator | TASK [magnum : Copying over config.json 
files for services] ******************** 2026-01-05 01:10:47.514313 | orchestrator | Monday 05 January 2026 01:09:35 +0000 (0:00:01.432) 0:00:47.644 ******** 2026-01-05 01:10:47.514324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514418 | orchestrator | 2026-01-05 01:10:47.514426 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-05 01:10:47.514433 | orchestrator | Monday 05 January 2026 01:09:38 +0000 (0:00:02.464) 0:00:50.108 ******** 2026-01-05 01:10:47.514441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514507 | orchestrator | 2026-01-05 01:10:47.514516 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-05 01:10:47.514528 | orchestrator | Monday 05 January 2026 01:09:43 +0000 (0:00:05.110) 0:00:55.219 ******** 2026-01-05 01:10:47.514536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514550 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:47.514557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514562 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514567 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:47.514578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 01:10:47.514586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:10:47.514599 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:10:47.514610 | orchestrator | 2026-01-05 01:10:47.514640 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-05 01:10:47.514647 | orchestrator | Monday 05 January 2026 01:09:43 +0000 (0:00:00.681) 0:00:55.900 ******** 2026-01-05 01:10:47.514655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 01:10:47.514688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:10:47.514710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-05 01:10:47.514716 | orchestrator |
2026-01-05 01:10:47.514725 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-05 01:10:47.514732 | orchestrator | Monday 05 January 2026 01:09:46 +0000 (0:00:02.467) 0:00:58.368 ********
2026-01-05 01:10:47.514739 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:10:47.514747 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:10:47.514755 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:10:47.514762 | orchestrator |
2026-01-05 01:10:47.514769 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-01-05 01:10:47.514784 | orchestrator | Monday 05 January 2026 01:09:46 +0000 (0:00:00.321) 0:00:58.690 ********
2026-01-05 01:10:47.514792 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:47.514801 | orchestrator |
2026-01-05 01:10:47.514808 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-01-05 01:10:47.514815 | orchestrator | Monday 05 January 2026 01:09:48 +0000 (0:00:02.189) 0:01:00.880 ********
2026-01-05 01:10:47.514823 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:47.514831 | orchestrator |
2026-01-05 01:10:47.514838 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-01-05 01:10:47.514846 | orchestrator | Monday 05 January 2026 01:09:51 +0000 (0:00:02.224) 0:01:03.104 ********
2026-01-05 01:10:47.514854 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:47.514861 | orchestrator |
2026-01-05 01:10:47.514869 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-05 01:10:47.514876 | orchestrator | Monday 05 January 2026 01:10:07 +0000 (0:00:16.649) 0:01:19.754 ********
2026-01-05 01:10:47.514884 | orchestrator |
2026-01-05 01:10:47.514891 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-05 01:10:47.514899 | orchestrator | Monday 05 January 2026 01:10:07 +0000 (0:00:00.066) 0:01:19.821 ********
2026-01-05 01:10:47.514906 | orchestrator |
2026-01-05 01:10:47.514914 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-05 01:10:47.514922 | orchestrator | Monday 05 January 2026 01:10:07 +0000 (0:00:00.073) 0:01:19.884 ********
2026-01-05 01:10:47.515002 | orchestrator |
2026-01-05 01:10:47.515012 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-01-05 01:10:47.515019 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.073) 0:01:19.958 ********
2026-01-05 01:10:47.515028 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:47.515035 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:10:47.515043 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:10:47.515051 | orchestrator |
2026-01-05 01:10:47.515059 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-01-05 01:10:47.515073 | orchestrator | Monday 05 January 2026 01:10:29 +0000 (0:00:21.041) 0:01:40.999 ********
2026-01-05 01:10:47.515081 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:47.515088 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:10:47.515096 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:10:47.515103 | orchestrator |
2026-01-05 01:10:47.515143 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:10:47.515153 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 01:10:47.515163 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 01:10:47.515171 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 01:10:47.515178 | orchestrator |
2026-01-05 01:10:47.515186 | orchestrator |
2026-01-05 01:10:47.515193 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:10:47.515201 | orchestrator | Monday 05 January 2026 01:10:45 +0000 (0:00:16.412) 0:01:57.412 ********
2026-01-05 01:10:47.515208 | orchestrator | ===============================================================================
2026-01-05 01:10:47.515216 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.04s
2026-01-05 01:10:47.515224 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.65s
2026-01-05 01:10:47.515232 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.41s
2026-01-05 01:10:47.515239 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.53s
2026-01-05 01:10:47.515247 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.11s
2026-01-05 01:10:47.515254 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.88s
2026-01-05 01:10:47.515262 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.80s
2026-01-05 01:10:47.515270 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.55s
2026-01-05 01:10:47.515277 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.54s
2026-01-05 01:10:47.515285 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.47s
2026-01-05 01:10:47.515291 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.43s
2026-01-05 01:10:47.515299 |
orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.42s 2026-01-05 01:10:47.515306 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.29s 2026-01-05 01:10:47.515314 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.52s 2026-01-05 01:10:47.515321 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.47s 2026-01-05 01:10:47.515329 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.46s 2026-01-05 01:10:47.515337 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.41s 2026-01-05 01:10:47.515345 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.22s 2026-01-05 01:10:47.515352 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-01-05 01:10:47.515360 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.43s 2026-01-05 01:10:47.515368 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:10:47.515381 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:10:47.515463 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:10:47.516234 | orchestrator | 2026-01-05 01:10:47 | INFO  | Task 025719a1-7dc6-4589-b2da-18e543bab796 is in state STARTED 2026-01-05 01:10:47.516425 | orchestrator | 2026-01-05 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:50.549407 | orchestrator | 2026-01-05 01:10:50 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:10:50.549526 | orchestrator | 2026-01-05 01:10:50 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is 
in state STARTED 2026-01-05 01:10:50.550455 | orchestrator | 2026-01-05 01:10:50 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:10:50.551821 | orchestrator | 2026-01-05 01:10:50 | INFO  | Task 025719a1-7dc6-4589-b2da-18e543bab796 is in state STARTED 2026-01-05 01:10:50.551880 | orchestrator | 2026-01-05 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:53.578383 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:10:53.578485 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:10:53.578900 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:10:53.579538 | orchestrator | 2026-01-05 01:10:53 | INFO  | Task 025719a1-7dc6-4589-b2da-18e543bab796 is in state STARTED 2026-01-05 01:10:53.579573 | orchestrator | 2026-01-05 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:56.602623 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:10:56.603152 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:10:56.603900 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:10:56.604362 | orchestrator | 2026-01-05 01:10:56 | INFO  | Task 025719a1-7dc6-4589-b2da-18e543bab796 is in state SUCCESS 2026-01-05 01:10:56.604542 | orchestrator | 2026-01-05 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:59.641991 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:10:59.644482 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 
01:10:59.647560 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:10:59.650901 | orchestrator | 2026-01-05 01:10:59 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:10:59.650996 | orchestrator | 2026-01-05 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:02.696833 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:02.698739 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:02.701087 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:11:02.703355 | orchestrator | 2026-01-05 01:11:02 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:02.703574 | orchestrator | 2026-01-05 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:05.763877 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:05.765000 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:05.769039 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:11:05.769866 | orchestrator | 2026-01-05 01:11:05 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:05.769992 | orchestrator | 2026-01-05 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:08.817334 | orchestrator | 2026-01-05 01:11:08 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:08.818813 | orchestrator | 2026-01-05 01:11:08 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:08.820257 | orchestrator 
| 2026-01-05 01:11:08 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:11:08.822580 | orchestrator | 2026-01-05 01:11:08 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:08.822620 | orchestrator | 2026-01-05 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:11.873459 | orchestrator | 2026-01-05 01:11:11 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:11.875359 | orchestrator | 2026-01-05 01:11:11 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:11.877587 | orchestrator | 2026-01-05 01:11:11 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:11:11.881081 | orchestrator | 2026-01-05 01:11:11 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:11.881128 | orchestrator | 2026-01-05 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:14.932759 | orchestrator | 2026-01-05 01:11:14 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:14.935629 | orchestrator | 2026-01-05 01:11:14 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:14.938101 | orchestrator | 2026-01-05 01:11:14 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED 2026-01-05 01:11:14.942795 | orchestrator | 2026-01-05 01:11:14 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:14.942864 | orchestrator | 2026-01-05 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:17.996945 | orchestrator | 2026-01-05 01:11:17 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state STARTED 2026-01-05 01:11:17.999000 | orchestrator | 2026-01-05 01:11:17 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED 2026-01-05 01:11:18.004708 | orchestrator | 2026-01-05 01:11:18 | INFO  | 
Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:18.006802 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:18.006843 | orchestrator | 2026-01-05 01:11:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:21.053114 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task db11ef73-72cb-4af0-9168-6bda21bba740 is in state SUCCESS
2026-01-05 01:11:21.056373 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:21.059421 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:21.061429 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:21.061532 | orchestrator | 2026-01-05 01:11:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:24.098446 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:24.100377 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:24.102767 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:24.103029 | orchestrator | 2026-01-05 01:11:24 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:27.152408 | orchestrator | 2026-01-05 01:11:27 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:27.154678 | orchestrator | 2026-01-05 01:11:27 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:27.158105 | orchestrator | 2026-01-05 01:11:27 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:27.158174 | orchestrator | 2026-01-05 01:11:27 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:30.209649 | orchestrator | 2026-01-05 01:11:30 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:30.210141 | orchestrator | 2026-01-05 01:11:30 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:30.213004 | orchestrator | 2026-01-05 01:11:30 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:30.213336 | orchestrator | 2026-01-05 01:11:30 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:33.260337 | orchestrator | 2026-01-05 01:11:33 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:33.262920 | orchestrator | 2026-01-05 01:11:33 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state STARTED
2026-01-05 01:11:33.263752 | orchestrator | 2026-01-05 01:11:33 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:33.263959 | orchestrator | 2026-01-05 01:11:33 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:36.313917 | orchestrator | 2026-01-05 01:11:36 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:36.319198 | orchestrator | 2026-01-05 01:11:36 | INFO  | Task 1dc69824-0b31-4024-8662-d5d49d21d2ac is in state SUCCESS
2026-01-05 01:11:36.321213 | orchestrator |
2026-01-05 01:11:36.321266 | orchestrator |
2026-01-05 01:11:36.321276 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:36.321283 | orchestrator |
2026-01-05 01:11:36.321290 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:36.321297 | orchestrator | Monday 05 January 2026 01:10:53 +0000 (0:00:00.177) 0:00:00.177 ********
2026-01-05 01:11:36.321314 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.321325 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:36.321329 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:36.321333 | orchestrator |
2026-01-05 01:11:36.321337 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:11:36.321341 | orchestrator | Monday 05 January 2026 01:10:54 +0000 (0:00:00.468) 0:00:00.645 ********
2026-01-05 01:11:36.321346 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-05 01:11:36.321351 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-05 01:11:36.321355 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-05 01:11:36.321358 | orchestrator |
2026-01-05 01:11:36.321362 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-05 01:11:36.321366 | orchestrator |
2026-01-05 01:11:36.321371 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-05 01:11:36.321375 | orchestrator | Monday 05 January 2026 01:10:55 +0000 (0:00:00.846) 0:00:01.492 ********
2026-01-05 01:11:36.321398 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.321402 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:36.321406 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:36.321410 | orchestrator |
2026-01-05 01:11:36.321414 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:36.321430 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321437 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321441 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321445 | orchestrator |
2026-01-05 01:11:36.321449 | orchestrator |
2026-01-05 01:11:36.321453 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:36.321456 | orchestrator | Monday 05 January 2026 01:10:55 +0000 (0:00:00.621) 0:00:02.113 ********
2026-01-05 01:11:36.321460 | orchestrator | ===============================================================================
2026-01-05 01:11:36.321464 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s
2026-01-05 01:11:36.321468 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.62s
2026-01-05 01:11:36.321472 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2026-01-05 01:11:36.321475 | orchestrator |
2026-01-05 01:11:36.321479 | orchestrator |
2026-01-05 01:11:36.321483 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-05 01:11:36.321486 | orchestrator |
2026-01-05 01:11:36.321490 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-05 01:11:36.321494 | orchestrator | Monday 05 January 2026 01:06:25 +0000 (0:00:00.324) 0:00:00.324 ********
2026-01-05 01:11:36.321497 | orchestrator | changed: [localhost]
2026-01-05 01:11:36.321546 | orchestrator |
2026-01-05 01:11:36.321551 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-05 01:11:36.321555 | orchestrator | Monday 05 January 2026 01:06:26 +0000 (0:00:01.643) 0:00:01.968 ********
2026-01-05 01:11:36.321559 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-01-05 01:11:36.321563 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-01-05 01:11:36.321567 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-01-05 01:11:36.321570 | orchestrator |
2026-01-05 01:11:36.321575 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:36.321579 | orchestrator |
2026-01-05 01:11:36.321583 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:36.321586 | orchestrator |
2026-01-05 01:11:36.321590 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:36.321623 | orchestrator |
2026-01-05 01:11:36.321627 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:36.321631 | orchestrator |
2026-01-05 01:11:36.321635 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:36.321638 | orchestrator | changed: [localhost]
2026-01-05 01:11:36.321642 | orchestrator |
2026-01-05 01:11:36.321646 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-05 01:11:36.321650 | orchestrator | Monday 05 January 2026 01:11:06 +0000 (0:04:39.489) 0:04:41.458 ********
2026-01-05 01:11:36.321653 | orchestrator | changed: [localhost]
2026-01-05 01:11:36.321657 | orchestrator |
2026-01-05 01:11:36.321661 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:36.321665 | orchestrator |
2026-01-05 01:11:36.321679 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:36.321683 | orchestrator | Monday 05 January 2026 01:11:19 +0000 (0:00:13.048) 0:04:54.506 ********
2026-01-05 01:11:36.321691 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.321695 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:36.321699 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:36.321703 | orchestrator |
2026-01-05 01:11:36.321706 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:11:36.321710 | orchestrator | Monday 05 January 2026 01:11:19 +0000 (0:00:00.329) 0:04:54.836 ********
2026-01-05 01:11:36.321714 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-05 01:11:36.321718 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-05 01:11:36.321721 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-05 01:11:36.321736 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-05 01:11:36.321740 | orchestrator |
2026-01-05 01:11:36.321744 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-05 01:11:36.321747 | orchestrator | skipping: no hosts matched
2026-01-05 01:11:36.321752 | orchestrator |
2026-01-05 01:11:36.321756 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:36.321760 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321766 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321770 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321773 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.321777 | orchestrator |
2026-01-05 01:11:36.321781 | orchestrator |
2026-01-05 01:11:36.321785 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:36.321789 | orchestrator | Monday 05 January 2026 01:11:20 +0000 (0:00:00.639) 0:04:55.476 ********
2026-01-05 01:11:36.321793 | orchestrator | ===============================================================================
2026-01-05 01:11:36.321796 | orchestrator | Download ironic-agent initramfs --------------------------------------- 279.49s
2026-01-05 01:11:36.321800 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.05s
2026-01-05 01:11:36.321804 | orchestrator | Ensure the destination directory exists --------------------------------- 1.64s
2026-01-05 01:11:36.321807 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-01-05 01:11:36.321811 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-01-05 01:11:36.321815 | orchestrator |
2026-01-05 01:11:36.321819 | orchestrator |
2026-01-05 01:11:36.321822 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:36.321826 | orchestrator |
2026-01-05 01:11:36.321830 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-05 01:11:36.321834 | orchestrator | Monday 05 January 2026 01:01:30 +0000 (0:00:00.273) 0:00:00.273 ********
2026-01-05 01:11:36.321837 | orchestrator | changed: [testbed-manager]
2026-01-05 01:11:36.321841 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.321845 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:11:36.321849 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:11:36.321852 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.321856 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.321904 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.321932 | orchestrator |
2026-01-05 01:11:36.321936 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:36.321939 | orchestrator | Monday 05 January 2026 01:01:31 +0000 (0:00:01.222) 0:00:01.496 ********
2026-01-05 01:11:36.321943 | orchestrator | changed: [testbed-manager]
2026-01-05 01:11:36.321947 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.321955 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:11:36.321959 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:11:36.321962 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.321966 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.321970 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.321974 | orchestrator |
2026-01-05 01:11:36.321977 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:11:36.321981 | orchestrator | Monday 05 January 2026 01:01:32 +0000 (0:00:01.228) 0:00:02.725 ********
2026-01-05 01:11:36.321985 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-05 01:11:36.321989 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-05 01:11:36.321992 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-05 01:11:36.321996 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-05 01:11:36.322000 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-05 01:11:36.322003 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-05 01:11:36.322007 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-05 01:11:36.322011 | orchestrator |
2026-01-05 01:11:36.322048 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-05 01:11:36.322052 | orchestrator |
2026-01-05 01:11:36.322055 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-05 01:11:36.322059 | orchestrator | Monday 05 January 2026 01:01:34 +0000 (0:00:01.680) 0:00:04.406 ********
2026-01-05 01:11:36.322063 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:36.322068 | orchestrator |
2026-01-05 01:11:36.322072 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-05 01:11:36.322080 | orchestrator | Monday 05 January 2026 01:01:35 +0000 (0:00:01.094) 0:00:05.500 ********
2026-01-05 01:11:36.322083 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-05 01:11:36.322087 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-05 01:11:36.322091 | orchestrator |
2026-01-05 01:11:36.322095 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-05 01:11:36.322099 | orchestrator | Monday 05 January 2026 01:01:39 +0000 (0:00:04.077) 0:00:09.578 ********
2026-01-05 01:11:36.322103 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:11:36.322107 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:11:36.322110 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322114 | orchestrator |
2026-01-05 01:11:36.322118 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-05 01:11:36.322122 | orchestrator | Monday 05 January 2026 01:01:43 +0000 (0:00:04.240) 0:00:13.819 ********
2026-01-05 01:11:36.322130 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322134 | orchestrator |
2026-01-05 01:11:36.322138 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-05 01:11:36.322141 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:01.300) 0:00:15.119 ********
2026-01-05 01:11:36.322145 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322149 | orchestrator |
2026-01-05 01:11:36.322153 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-05 01:11:36.322156 | orchestrator | Monday 05 January 2026 01:01:47 +0000 (0:00:01.785) 0:00:16.905 ********
2026-01-05 01:11:36.322160 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322164 | orchestrator |
2026-01-05 01:11:36.322167 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:11:36.322171 | orchestrator | Monday 05 January 2026 01:01:49 +0000 (0:00:02.897) 0:00:19.802 ********
2026-01-05 01:11:36.322175 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322179 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322183 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322186 | orchestrator |
2026-01-05 01:11:36.322190 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-05 01:11:36.322198 | orchestrator | Monday 05 January 2026 01:01:50 +0000 (0:00:00.499) 0:00:20.302 ********
2026-01-05 01:11:36.322201 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322205 | orchestrator |
2026-01-05 01:11:36.322209 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-05 01:11:36.322213 | orchestrator | Monday 05 January 2026 01:02:17 +0000 (0:00:27.494) 0:00:47.796 ********
2026-01-05 01:11:36.322216 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322220 | orchestrator |
2026-01-05 01:11:36.322224 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-05 01:11:36.322228 | orchestrator | Monday 05 January 2026 01:02:32 +0000 (0:00:14.316) 0:01:02.113 ********
2026-01-05 01:11:36.322231 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322235 | orchestrator |
2026-01-05 01:11:36.322239 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-05 01:11:36.322243 | orchestrator | Monday 05 January 2026 01:02:45 +0000 (0:00:13.090) 0:01:15.204 ********
2026-01-05 01:11:36.322246 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322250 | orchestrator |
2026-01-05 01:11:36.322254 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-05 01:11:36.322258 | orchestrator | Monday 05 January 2026 01:02:46 +0000 (0:00:01.128) 0:01:16.332 ********
2026-01-05 01:11:36.322261 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322265 | orchestrator |
2026-01-05 01:11:36.322269 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:11:36.322273 | orchestrator | Monday 05 January 2026 01:02:46 +0000 (0:00:00.445) 0:01:16.777 ********
2026-01-05 01:11:36.322276 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:36.322280 | orchestrator |
2026-01-05 01:11:36.322284 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-05 01:11:36.322288 | orchestrator | Monday 05 January 2026 01:02:47 +0000 (0:00:00.515) 0:01:17.293 ********
2026-01-05 01:11:36.322291 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322295 | orchestrator |
2026-01-05 01:11:36.322299 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-05 01:11:36.322303 | orchestrator | Monday 05 January 2026 01:03:05 +0000 (0:00:18.406) 0:01:35.699 ********
2026-01-05 01:11:36.322306 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322310 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322314 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322318 | orchestrator |
2026-01-05 01:11:36.322321 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-05 01:11:36.322325 | orchestrator |
2026-01-05 01:11:36.322329 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-05 01:11:36.322333 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:00.332) 0:01:36.032 ********
2026-01-05 01:11:36.322336 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:36.322340 | orchestrator |
2026-01-05 01:11:36.322344 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-05 01:11:36.322348 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:00.647) 0:01:36.679 ********
2026-01-05 01:11:36.322351 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322355 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322359 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322362 | orchestrator |
2026-01-05 01:11:36.322366 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-05 01:11:36.322370 | orchestrator | Monday 05 January 2026 01:03:08 +0000 (0:00:02.163) 0:01:38.842 ********
2026-01-05 01:11:36.322374 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322377 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322381 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322385 | orchestrator |
2026-01-05 01:11:36.322393 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-05 01:11:36.322397 | orchestrator | Monday 05 January 2026 01:03:11 +0000 (0:00:02.299) 0:01:41.142 ********
2026-01-05 01:11:36.322404 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322408 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322411 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322415 | orchestrator |
2026-01-05 01:11:36.322419 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-05 01:11:36.322423 | orchestrator | Monday 05 January 2026 01:03:11 +0000 (0:00:00.473) 0:01:41.615 ********
2026-01-05 01:11:36.322427 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:11:36.322430 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322434 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:11:36.322438 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322442 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-05 01:11:36.322446 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-05 01:11:36.322449 | orchestrator |
2026-01-05 01:11:36.322456 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-05 01:11:36.322460 | orchestrator | Monday 05 January 2026 01:03:20 +0000 (0:00:09.138) 0:01:50.754 ********
2026-01-05 01:11:36.322463 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322467 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322471 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322475 | orchestrator |
2026-01-05 01:11:36.322478 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-05 01:11:36.322482 | orchestrator | Monday 05 January 2026 01:03:22 +0000 (0:00:01.249) 0:01:52.004 ********
2026-01-05 01:11:36.322486 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 01:11:36.322490 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322493 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:11:36.322497 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322501 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:11:36.322505 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322509 | orchestrator |
2026-01-05 01:11:36.322512 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-05 01:11:36.322516 | orchestrator | Monday 05 January 2026 01:03:23 +0000 (0:00:01.791) 0:01:53.795 ********
2026-01-05 01:11:36.322520 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322524 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322527 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322531 | orchestrator |
2026-01-05 01:11:36.322535 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-05 01:11:36.322538 | orchestrator | Monday 05 January 2026 01:03:24 +0000 (0:00:00.777) 0:01:54.573 ********
2026-01-05 01:11:36.322542 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322546 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322550 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322553 | orchestrator |
2026-01-05 01:11:36.322557 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-05 01:11:36.322561 | orchestrator | Monday 05 January 2026 01:03:26 +0000 (0:00:01.295) 0:01:55.868 ********
2026-01-05 01:11:36.322565 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322568 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322572 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322584 | orchestrator |
2026-01-05 01:11:36.322588 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-05 01:11:36.322592 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:02.469) 0:01:58.337 ********
2026-01-05 01:11:36.322596 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322600 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322604 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322611 | orchestrator |
2026-01-05 01:11:36.322615 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-05 01:11:36.322618 | orchestrator | Monday 05 January 2026 01:03:51 +0000 (0:00:22.743) 0:02:21.081 ********
2026-01-05 01:11:36.322622 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322631 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322635 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322639 | orchestrator |
2026-01-05 01:11:36.322643 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-05 01:11:36.322646 | orchestrator | Monday 05 January 2026 01:04:04 +0000 (0:00:13.327) 0:02:34.409 ********
2026-01-05 01:11:36.322650 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.322654 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322658 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322661 | orchestrator |
2026-01-05 01:11:36.322665 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-05 01:11:36.322669 | orchestrator | Monday 05 January 2026 01:04:05 +0000 (0:00:01.193) 0:02:35.602 ********
2026-01-05 01:11:36.322673 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322676 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322680 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.322684 | orchestrator |
2026-01-05 01:11:36.322688 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-05 01:11:36.322691 | orchestrator | Monday 05 January 2026 01:04:18 +0000 (0:00:12.614) 0:02:48.217 ********
2026-01-05 01:11:36.322695 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322699 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322703 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322706 | orchestrator |
2026-01-05 01:11:36.322710 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-05 01:11:36.322714 | orchestrator | Monday 05 January 2026 01:04:19 +0000 (0:00:01.062) 0:02:49.279 ********
2026-01-05 01:11:36.322718 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.322721 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.322725 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.322729 | orchestrator |
2026-01-05 01:11:36.322733 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-05 01:11:36.322736 | orchestrator |
2026-01-05 01:11:36.322740 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:11:36.322744 | orchestrator | Monday 05 January 2026 01:04:20 +0000 (0:00:00.583) 0:02:49.863 ********
2026-01-05 01:11:36.322751 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:36.322755 | orchestrator |
2026-01-05 01:11:36.322759 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-01-05 01:11:36.322763 | orchestrator | Monday 05 January 2026 01:04:20 +0000 (0:00:00.598) 0:02:50.461 ********
2026-01-05 01:11:36.322767 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-05 01:11:36.322770 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-05 01:11:36.322774 | orchestrator |
2026-01-05 01:11:36.322778 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-01-05 01:11:36.322781 | orchestrator | Monday 05 January 2026 01:04:23 +0000 (0:00:03.354) 0:02:53.815 ********
2026-01-05 01:11:36.322788 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-05 01:11:36.322792 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-05 01:11:36.322796 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-05 01:11:36.322800 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-05 01:11:36.322804 | orchestrator |
2026-01-05 01:11:36.322811 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-05 01:11:36.322815 | orchestrator | Monday 05 January 2026 01:04:30 +0000 (0:00:06.443) 0:03:00.259 ********
2026-01-05 01:11:36.322819 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:11:36.322823 | orchestrator |
2026-01-05 01:11:36.322826 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-05 01:11:36.322830 | orchestrator | Monday 05 January 2026 01:04:33 +0000 (0:00:03.163) 0:03:03.422 ********
2026-01-05 01:11:36.322834 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:11:36.322838 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-05 01:11:36.322842 | orchestrator |
2026-01-05 01:11:36.322845 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-05 01:11:36.322849 | orchestrator | Monday 05 January 2026 01:04:38 +0000 (0:00:04.702) 0:03:08.125 ********
2026-01-05 01:11:36.322853 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:11:36.322856 | orchestrator |
2026-01-05 01:11:36.322875 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-01-05 01:11:36.322880 | orchestrator | Monday 05 January 2026 01:04:41 +0000 (0:00:03.165) 0:03:11.290 ********
2026-01-05 01:11:36.322884 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-05 01:11:36.322887 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-05 01:11:36.322891 | orchestrator |
2026-01-05 01:11:36.322895 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-05 01:11:36.322899 | orchestrator | Monday 05 January 2026 01:04:49 +0000 (0:00:08.140) 0:03:19.430 ********
2026-01-05 01:11:36.322907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-05 01:11:36.322917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-05 01:11:36.322937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:11:36.322944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:11:36.322948 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.322952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.322956 | orchestrator | 2026-01-05 01:11:36.322960 | orchestrator | TASK [nova : Check if policies 
shall be overwritten] *************************** 2026-01-05 01:11:36.322964 | orchestrator | Monday 05 January 2026 01:04:52 +0000 (0:00:02.936) 0:03:22.367 ******** 2026-01-05 01:11:36.322968 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.322972 | orchestrator | 2026-01-05 01:11:36.322976 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-05 01:11:36.322980 | orchestrator | Monday 05 January 2026 01:04:52 +0000 (0:00:00.258) 0:03:22.626 ******** 2026-01-05 01:11:36.322984 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.322988 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.322991 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.322999 | orchestrator | 2026-01-05 01:11:36.323006 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-05 01:11:36.323010 | orchestrator | Monday 05 January 2026 01:04:53 +0000 (0:00:00.327) 0:03:22.953 ******** 2026-01-05 01:11:36.323014 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:11:36.323018 | orchestrator | 2026-01-05 01:11:36.323022 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-05 01:11:36.323025 | orchestrator | Monday 05 January 2026 01:04:55 +0000 (0:00:02.504) 0:03:25.458 ******** 2026-01-05 01:11:36.323029 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.323033 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.323037 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.323040 | orchestrator | 2026-01-05 01:11:36.323044 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-05 01:11:36.323051 | orchestrator | Monday 05 January 2026 01:04:56 +0000 (0:00:00.478) 0:03:25.936 ******** 2026-01-05 01:11:36.323055 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:11:36.323059 | orchestrator | 2026-01-05 01:11:36.323062 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-05 01:11:36.323066 | orchestrator | Monday 05 January 2026 01:04:56 +0000 (0:00:00.490) 0:03:26.427 ******** 2026-01-05 01:11:36.323070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323105 | orchestrator | 2026-01-05 01:11:36.323108 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-05 01:11:36.323112 | orchestrator | Monday 05 January 2026 01:04:59 +0000 (0:00:03.246) 0:03:29.673 ******** 2026-01-05 01:11:36.323116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323132 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.323141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323150 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.323154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323165 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.323169 | orchestrator | 2026-01-05 01:11:36.323175 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-05 01:11:36.323179 | orchestrator | Monday 05 January 2026 01:05:01 +0000 (0:00:01.332) 0:03:31.006 ******** 2026-01-05 01:11:36.323188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 
01:11:36.323192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323212 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.323218 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.323262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323272 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323278 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.323283 | orchestrator | 2026-01-05 01:11:36.323289 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-05 01:11:36.323295 | orchestrator | Monday 05 January 2026 01:05:03 +0000 (0:00:01.860) 0:03:32.867 ******** 2026-01-05 01:11:36.323302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323368 | orchestrator | 2026-01-05 01:11:36.323372 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-05 01:11:36.323376 | orchestrator | Monday 05 January 2026 01:05:07 +0000 (0:00:04.468) 0:03:37.336 ******** 2026-01-05 01:11:36.323384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323426 | orchestrator | 2026-01-05 01:11:36.323430 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-05 01:11:36.323434 | orchestrator | Monday 05 January 2026 01:05:19 +0000 (0:00:11.843) 0:03:49.179 ******** 2026-01-05 01:11:36.323438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323449 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.323453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323468 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.323472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:11:36.323479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.323483 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.323487 | orchestrator | 2026-01-05 01:11:36.323491 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-05 01:11:36.323516 | orchestrator | Monday 05 January 2026 01:05:20 +0000 (0:00:01.132) 0:03:50.311 ******** 2026-01-05 01:11:36.323520 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:11:36.323524 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:36.323528 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:11:36.323532 | orchestrator | 2026-01-05 01:11:36.323536 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-05 01:11:36.323539 | orchestrator | Monday 05 January 2026 01:05:22 +0000 (0:00:01.610) 0:03:51.922 ******** 2026-01-05 01:11:36.323543 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.323580 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.323584 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 01:11:36.323587 | orchestrator | 2026-01-05 01:11:36.323591 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-05 01:11:36.323595 | orchestrator | Monday 05 January 2026 01:05:22 +0000 (0:00:00.792) 0:03:52.714 ******** 2026-01-05 01:11:36.323603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:11:36.323624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.323639 | orchestrator | 2026-01-05 01:11:36.323912 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:11:36.323930 | orchestrator | Monday 05 January 2026 01:05:26 +0000 (0:00:03.229) 0:03:55.943 ******** 2026-01-05 01:11:36.323934 | orchestrator | 2026-01-05 01:11:36.323939 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:11:36.323943 | orchestrator | Monday 05 January 2026 01:05:26 +0000 (0:00:00.389) 0:03:56.333 ******** 2026-01-05 01:11:36.323947 | orchestrator | 2026-01-05 01:11:36.323950 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:11:36.323993 | orchestrator | Monday 05 January 2026 01:05:26 +0000 (0:00:00.339) 0:03:56.673 ******** 2026-01-05 01:11:36.323998 | orchestrator | 2026-01-05 01:11:36.324001 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-05 01:11:36.324005 | orchestrator | Monday 05 January 2026 01:05:27 +0000 (0:00:00.429) 0:03:57.102 ******** 2026-01-05 01:11:36.324009 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:36.324013 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:11:36.324017 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:11:36.324021 | orchestrator | 2026-01-05 01:11:36.324025 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-05 01:11:36.324029 | orchestrator | Monday 05 January 2026 01:05:46 +0000 (0:00:19.435) 0:04:16.538 ******** 2026-01-05 01:11:36.324032 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:36.324036 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:11:36.324040 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:11:36.324043 | orchestrator | 2026-01-05 01:11:36.324047 | 
orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-05 01:11:36.324051 | orchestrator | 2026-01-05 01:11:36.324055 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:11:36.324058 | orchestrator | Monday 05 January 2026 01:06:02 +0000 (0:00:15.419) 0:04:31.958 ******** 2026-01-05 01:11:36.324062 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:11:36.324067 | orchestrator | 2026-01-05 01:11:36.324073 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:11:36.324080 | orchestrator | Monday 05 January 2026 01:06:04 +0000 (0:00:02.270) 0:04:34.229 ******** 2026-01-05 01:11:36.324090 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.324096 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.324103 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.324110 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.324116 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.324122 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.324128 | orchestrator | 2026-01-05 01:11:36.324135 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-05 01:11:36.324142 | orchestrator | Monday 05 January 2026 01:06:05 +0000 (0:00:01.104) 0:04:35.334 ******** 2026-01-05 01:11:36.324148 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.324154 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.324161 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.324167 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:11:36.324173 | orchestrator | 2026-01-05 01:11:36.324181 | orchestrator | TASK 
[module-load : Load modules] ********************************************** 2026-01-05 01:11:36.324188 | orchestrator | Monday 05 January 2026 01:06:07 +0000 (0:00:01.525) 0:04:36.859 ******** 2026-01-05 01:11:36.324195 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-05 01:11:36.324202 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-05 01:11:36.324208 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-05 01:11:36.324215 | orchestrator | 2026-01-05 01:11:36.324221 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 01:11:36.324228 | orchestrator | Monday 05 January 2026 01:06:07 +0000 (0:00:00.778) 0:04:37.638 ******** 2026-01-05 01:11:36.324235 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-05 01:11:36.324241 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-05 01:11:36.324248 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-05 01:11:36.324254 | orchestrator | 2026-01-05 01:11:36.324260 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 01:11:36.324267 | orchestrator | Monday 05 January 2026 01:06:09 +0000 (0:00:01.423) 0:04:39.062 ******** 2026-01-05 01:11:36.324280 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-05 01:11:36.324286 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.324292 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-05 01:11:36.324298 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.324304 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-05 01:11:36.324310 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.324316 | orchestrator | 2026-01-05 01:11:36.324322 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-05 
01:11:36.324328 | orchestrator | Monday 05 January 2026 01:06:10 +0000 (0:00:00.935) 0:04:39.998 ******** 2026-01-05 01:11:36.324341 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:11:36.324347 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:11:36.324354 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.324361 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:11:36.324367 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:11:36.324373 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:11:36.324380 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:11:36.324386 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.324400 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:11:36.324407 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:11:36.324413 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:11:36.324420 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.324427 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:11:36.324433 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:11:36.324439 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:11:36.324445 | orchestrator | 2026-01-05 01:11:36.324452 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-05 01:11:36.324458 | orchestrator | Monday 05 January 2026 01:06:11 +0000 (0:00:01.624) 0:04:41.623 
******** 2026-01-05 01:11:36.324465 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.324471 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:11:36.324477 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.324483 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:11:36.324490 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.324522 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:11:36.324528 | orchestrator | 2026-01-05 01:11:36.324535 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-05 01:11:36.324541 | orchestrator | Monday 05 January 2026 01:06:13 +0000 (0:00:01.537) 0:04:43.161 ******** 2026-01-05 01:11:36.324547 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.324560 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.324566 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.324571 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:11:36.324576 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:11:36.324582 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:11:36.324588 | orchestrator | 2026-01-05 01:11:36.324594 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-05 01:11:36.324600 | orchestrator | Monday 05 January 2026 01:06:15 +0000 (0:00:02.429) 0:04:45.591 ******** 2026-01-05 01:11:36.324607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324648 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324757 | orchestrator | 2026-01-05 01:11:36.324764 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:11:36.324771 | orchestrator | Monday 05 January 2026 01:06:19 +0000 (0:00:03.714) 0:04:49.305 ******** 2026-01-05 01:11:36.324777 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:11:36.324785 | orchestrator | 2026-01-05 01:11:36.324791 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-05 01:11:36.324801 | orchestrator | Monday 05 January 2026 01:06:22 +0000 (0:00:02.656) 0:04:51.962 ******** 2026-01-05 01:11:36.324960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2026-01-05 01:11:36.324988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.324996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325009 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 
01:11:36.325024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.325058 | orchestrator | 2026-01-05 01:11:36.325062 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-05 01:11:36.325066 | orchestrator | Monday 05 January 2026 01:06:28 +0000 (0:00:05.926) 0:04:57.888 ******** 2026-01-05 01:11:36.325070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.325077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.325085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325092 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.325097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.325101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.325105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325109 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.325115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.325779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.325809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325813 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.325818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.325823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325827 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.325832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.325844 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325850 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.325903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.325919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325926 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
01:11:36.325931 | orchestrator | 2026-01-05 01:11:36.325938 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-05 01:11:36.325945 | orchestrator | Monday 05 January 2026 01:06:31 +0000 (0:00:03.650) 0:05:01.539 ******** 2026-01-05 01:11:36.325951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.325958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.325964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.325970 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.325980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.325998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2026-01-05 01:11:36.326004 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.326010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.326058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.326064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.326071 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.326078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.326095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.326102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.326182 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.326201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.326209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.326215 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.326221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.326229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.326241 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.326248 | orchestrator | 2026-01-05 01:11:36.326254 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:11:36.326260 | orchestrator | Monday 05 January 2026 01:06:35 +0000 (0:00:03.808) 0:05:05.348 ******** 
2026-01-05 01:11:36.326266 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.326271 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.326278 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.326284 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:11:36.326290 | orchestrator | 2026-01-05 01:11:36.326297 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-05 01:11:36.326308 | orchestrator | Monday 05 January 2026 01:06:36 +0000 (0:00:01.320) 0:05:06.668 ******** 2026-01-05 01:11:36.326315 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 01:11:36.326322 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 01:11:36.326328 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 01:11:36.326336 | orchestrator | 2026-01-05 01:11:36.326343 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-05 01:11:36.326349 | orchestrator | Monday 05 January 2026 01:06:37 +0000 (0:00:01.180) 0:05:07.849 ******** 2026-01-05 01:11:36.326355 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 01:11:36.326361 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 01:11:36.326368 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 01:11:36.326374 | orchestrator | 2026-01-05 01:11:36.326381 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-05 01:11:36.326387 | orchestrator | Monday 05 January 2026 01:06:39 +0000 (0:00:01.653) 0:05:09.503 ******** 2026-01-05 01:11:36.326393 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:11:36.326400 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:11:36.326407 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:11:36.326414 | orchestrator | 2026-01-05 01:11:36.326421 | orchestrator | TASK [nova-cell : 
Extract cinder key from file] ******************************** 2026-01-05 01:11:36.326428 | orchestrator | Monday 05 January 2026 01:06:40 +0000 (0:00:00.450) 0:05:09.953 ******** 2026-01-05 01:11:36.326434 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:11:36.326441 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:11:36.326447 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:11:36.326453 | orchestrator | 2026-01-05 01:11:36.326460 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-05 01:11:36.326467 | orchestrator | Monday 05 January 2026 01:06:40 +0000 (0:00:00.868) 0:05:10.821 ******** 2026-01-05 01:11:36.326474 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-05 01:11:36.326480 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-05 01:11:36.326487 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-05 01:11:36.326494 | orchestrator | 2026-01-05 01:11:36.326502 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-05 01:11:36.326509 | orchestrator | Monday 05 January 2026 01:06:42 +0000 (0:00:01.277) 0:05:12.099 ******** 2026-01-05 01:11:36.326516 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-05 01:11:36.326523 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-05 01:11:36.326529 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-05 01:11:36.326533 | orchestrator | 2026-01-05 01:11:36.326538 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-05 01:11:36.326543 | orchestrator | Monday 05 January 2026 01:06:43 +0000 (0:00:01.109) 0:05:13.209 ******** 2026-01-05 01:11:36.326548 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-05 01:11:36.326552 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-05 
01:11:36.326562 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-05 01:11:36.326566 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-05 01:11:36.326570 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-05 01:11:36.326575 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-05 01:11:36.326579 | orchestrator | 2026-01-05 01:11:36.326583 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-05 01:11:36.326588 | orchestrator | Monday 05 January 2026 01:06:47 +0000 (0:00:04.324) 0:05:17.533 ******** 2026-01-05 01:11:36.326592 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.326596 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.326601 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.326605 | orchestrator | 2026-01-05 01:11:36.326609 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-05 01:11:36.326614 | orchestrator | Monday 05 January 2026 01:06:48 +0000 (0:00:00.414) 0:05:17.947 ******** 2026-01-05 01:11:36.326618 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.326622 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.326627 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.326631 | orchestrator | 2026-01-05 01:11:36.326635 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-05 01:11:36.326640 | orchestrator | Monday 05 January 2026 01:06:48 +0000 (0:00:00.318) 0:05:18.266 ******** 2026-01-05 01:11:36.326644 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:11:36.326649 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:11:36.326653 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:11:36.326657 | orchestrator | 2026-01-05 01:11:36.326662 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] 
************************* 2026-01-05 01:11:36.326666 | orchestrator | Monday 05 January 2026 01:06:49 +0000 (0:00:01.122) 0:05:19.389 ******** 2026-01-05 01:11:36.326672 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-05 01:11:36.326677 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-05 01:11:36.326686 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-05 01:11:36.326690 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-05 01:11:36.326695 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-05 01:11:36.326699 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-05 01:11:36.326704 | orchestrator | 2026-01-05 01:11:36.326712 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-05 01:11:36.326717 | orchestrator | Monday 05 January 2026 01:06:54 +0000 (0:00:04.891) 0:05:24.280 ******** 2026-01-05 01:11:36.326721 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:11:36.326726 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:11:36.326730 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:11:36.326734 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:11:36.326739 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:11:36.326743 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 
01:11:36.326748 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:11:36.326752 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:11:36.326757 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:11:36.326761 | orchestrator | 2026-01-05 01:11:36.326766 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-05 01:11:36.326774 | orchestrator | Monday 05 January 2026 01:06:57 +0000 (0:00:03.500) 0:05:27.780 ******** 2026-01-05 01:11:36.326777 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.326781 | orchestrator | 2026-01-05 01:11:36.326785 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-05 01:11:36.326789 | orchestrator | Monday 05 January 2026 01:06:58 +0000 (0:00:00.170) 0:05:27.950 ******** 2026-01-05 01:11:36.326792 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.326796 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.326800 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.326803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.326807 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.326811 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.326815 | orchestrator | 2026-01-05 01:11:36.326818 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-05 01:11:36.326822 | orchestrator | Monday 05 January 2026 01:06:58 +0000 (0:00:00.598) 0:05:28.549 ******** 2026-01-05 01:11:36.326826 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 01:11:36.326829 | orchestrator | 2026-01-05 01:11:36.326833 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-05 01:11:36.326837 | orchestrator | Monday 05 January 2026 01:06:59 +0000 (0:00:00.703) 0:05:29.253 ******** 2026-01-05 01:11:36.326841 | orchestrator | skipping: 
[testbed-node-3] 2026-01-05 01:11:36.326844 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.326848 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.326852 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.326855 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.326859 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.326885 | orchestrator | 2026-01-05 01:11:36.326889 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-05 01:11:36.326893 | orchestrator | Monday 05 January 2026 01:07:00 +0000 (0:00:00.901) 0:05:30.154 ******** 2026-01-05 01:11:36.326897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 
01:11:36.326975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.326993 | orchestrator | 2026-01-05 01:11:36.326996 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-05 01:11:36.327000 | orchestrator | Monday 05 January 2026 01:07:04 +0000 (0:00:03.980) 0:05:34.135 ******** 2026-01-05 01:11:36.327004 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327053 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327057 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327103 | orchestrator | 2026-01-05 01:11:36.327107 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-05 01:11:36.327111 | orchestrator | Monday 05 January 2026 01:07:11 +0000 (0:00:07.235) 0:05:41.370 ******** 2026-01-05 01:11:36.327115 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327119 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327122 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.327131 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327134 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327138 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327142 | orchestrator | 2026-01-05 01:11:36.327146 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-05 01:11:36.327149 | orchestrator | Monday 05 January 2026 01:07:12 +0000 (0:00:01.476) 0:05:42.846 ******** 2026-01-05 01:11:36.327153 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:11:36.327157 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:11:36.327161 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:11:36.327165 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:11:36.327171 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:11:36.327175 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:11:36.327179 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327182 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:11:36.327186 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:11:36.327190 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327194 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:11:36.327197 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327203 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:11:36.327207 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:11:36.327211 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:11:36.327215 | orchestrator | 2026-01-05 01:11:36.327219 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-05 01:11:36.327222 | orchestrator | Monday 05 January 2026 01:07:17 +0000 
(0:00:04.308) 0:05:47.155 ******** 2026-01-05 01:11:36.327226 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327230 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.327234 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327237 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327241 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327245 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327248 | orchestrator | 2026-01-05 01:11:36.327252 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-05 01:11:36.327256 | orchestrator | Monday 05 January 2026 01:07:17 +0000 (0:00:00.602) 0:05:47.757 ******** 2026-01-05 01:11:36.327259 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:11:36.327263 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:11:36.327267 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:11:36.327271 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:11:36.327274 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:11:36.327278 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327282 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:11:36.327290 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327293 
| orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327297 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327301 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327304 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327308 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327312 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:11:36.327316 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327319 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327323 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327327 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327330 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327334 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327338 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:11:36.327341 | orchestrator | 2026-01-05 01:11:36.327345 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-05 01:11:36.327349 | orchestrator | Monday 05 January 2026 01:07:23 +0000 (0:00:05.608) 0:05:53.366 ******** 
2026-01-05 01:11:36.327353 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:11:36.327357 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:11:36.327363 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:11:36.327367 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:11:36.327370 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:11:36.327374 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:11:36.327378 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:11:36.327381 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:11:36.327385 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:11:36.327392 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:11:36.327395 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:11:36.327399 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:11:36.327403 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:11:36.327407 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327410 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:11:36.327414 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:11:36.327418 | orchestrator 
| skipping: [testbed-node-1] 2026-01-05 01:11:36.327421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:11:36.327430 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327433 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:11:36.327437 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:11:36.327441 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:11:36.327445 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:11:36.327448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:11:36.327452 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:11:36.327455 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:11:36.327459 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:11:36.327463 | orchestrator | 2026-01-05 01:11:36.327467 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-05 01:11:36.327470 | orchestrator | Monday 05 January 2026 01:07:30 +0000 (0:00:07.030) 0:06:00.397 ******** 2026-01-05 01:11:36.327474 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327478 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.327482 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327485 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327489 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327493 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327497 | orchestrator | 2026-01-05 01:11:36.327500 | 
orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-05 01:11:36.327504 | orchestrator | Monday 05 January 2026 01:07:31 +0000 (0:00:00.920) 0:06:01.317 ******** 2026-01-05 01:11:36.327508 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327512 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.327516 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327523 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327529 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327535 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327541 | orchestrator | 2026-01-05 01:11:36.327547 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-05 01:11:36.327553 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.711) 0:06:02.028 ******** 2026-01-05 01:11:36.327559 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:11:36.327565 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327571 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327577 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327583 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:11:36.327589 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:11:36.327594 | orchestrator | 2026-01-05 01:11:36.327601 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-05 01:11:36.327607 | orchestrator | Monday 05 January 2026 01:07:35 +0000 (0:00:03.608) 0:06:05.637 ******** 2026-01-05 01:11:36.327617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327643 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:11:36.327647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327659 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:11:36.327676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:11:36.327680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327684 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.327692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327695 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.327709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327713 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:11:36.327724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:11:36.327728 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327731 | orchestrator | 2026-01-05 01:11:36.327735 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-05 01:11:36.327739 | orchestrator | Monday 05 January 2026 01:07:38 +0000 (0:00:02.370) 0:06:08.007 ******** 2026-01-05 01:11:36.327743 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-05 01:11:36.327747 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327750 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:36.327754 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-05 01:11:36.327759 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327765 | orchestrator | skipping: [testbed-node-4] 2026-01-05 
01:11:36.327771 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-05 01:11:36.327776 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327787 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-05 01:11:36.327794 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327800 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:11:36.327805 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-05 01:11:36.327811 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327817 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:36.327823 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:36.327829 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-05 01:11:36.327835 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-05 01:11:36.327840 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:36.327851 | orchestrator | 2026-01-05 01:11:36.327856 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-05 01:11:36.327903 | orchestrator | Monday 05 January 2026 01:07:38 +0000 (0:00:00.697) 0:06:08.704 ******** 2026-01-05 01:11:36.327911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:11:36.327997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:11:36.328062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-05 01:11:36.328066 | orchestrator |
2026-01-05 01:11:36.328070 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-05 01:11:36.328074 | orchestrator | Monday 05 January 2026 01:07:42 +0000 (0:00:03.512) 0:06:12.217 ********
2026-01-05 01:11:36.328077 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:36.328081 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:36.328085 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:36.328089 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.328093 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.328096 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.328100 | orchestrator |
2026-01-05 01:11:36.328104 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328108 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.854) 0:06:13.071 ********
2026-01-05 01:11:36.328111 | orchestrator |
2026-01-05 01:11:36.328115 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328119 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.149) 0:06:13.220 ********
2026-01-05 01:11:36.328128 | orchestrator |
2026-01-05 01:11:36.328131 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328135 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.139) 0:06:13.360 ********
2026-01-05 01:11:36.328139 | orchestrator |
2026-01-05 01:11:36.328143 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328146 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.146) 0:06:13.506 ********
2026-01-05 01:11:36.328150 | orchestrator |
2026-01-05 01:11:36.328154 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328157 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.159) 0:06:13.666 ********
2026-01-05 01:11:36.328161 | orchestrator |
2026-01-05 01:11:36.328165 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-05 01:11:36.328169 | orchestrator | Monday 05 January 2026 01:07:43 +0000 (0:00:00.156) 0:06:13.822 ********
2026-01-05 01:11:36.328172 | orchestrator |
2026-01-05 01:11:36.328176 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-01-05 01:11:36.328180 | orchestrator | Monday 05 January 2026 01:07:44 +0000 (0:00:00.436) 0:06:14.258 ********
2026-01-05 01:11:36.328184 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:11:36.328187 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.328191 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:11:36.328195 | orchestrator |
2026-01-05 01:11:36.328198 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-01-05 01:11:36.328202 | orchestrator | Monday 05 January 2026 01:07:57 +0000 (0:00:12.868) 0:06:27.127 ********
2026-01-05 01:11:36.328206 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.328210 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:11:36.328213 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:11:36.328217 | orchestrator |
2026-01-05 01:11:36.328221 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-01-05 01:11:36.328224 | orchestrator | Monday 05 January 2026 01:08:11 +0000 (0:00:14.718) 0:06:41.845 ********
2026-01-05 01:11:36.328228 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.328232 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.328235 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.328239 | orchestrator |
2026-01-05 01:11:36.328243 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-01-05 01:11:36.328246 | orchestrator | Monday 05 January 2026 01:09:05 +0000 (0:00:53.433) 0:07:35.279 ********
2026-01-05 01:11:36.328250 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.328256 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.328263 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.328273 | orchestrator |
2026-01-05 01:11:36.328280 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-01-05 01:11:36.328287 | orchestrator | Monday 05 January 2026 01:09:48 +0000 (0:00:42.774) 0:08:18.054 ********
2026-01-05 01:11:36.328293 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.328300 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.328305 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.328311 | orchestrator |
2026-01-05 01:11:36.328322 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-01-05 01:11:36.328328 | orchestrator | Monday 05 January 2026 01:09:49 +0000 (0:00:00.811) 0:08:18.865 ********
2026-01-05 01:11:36.328334 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.328340 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.328346 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.328352 | orchestrator |
2026-01-05 01:11:36.328358 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-01-05 01:11:36.328365 | orchestrator | Monday 05 January 2026 01:09:49 +0000 (0:00:00.774) 0:08:19.639 ********
2026-01-05 01:11:36.328371 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:36.328377 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:36.328389 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:36.328396 | orchestrator |
2026-01-05 01:11:36.328403 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-01-05 01:11:36.328413 | orchestrator | Monday 05 January 2026 01:10:21 +0000 (0:00:31.703) 0:08:51.343 ********
2026-01-05 01:11:36.328420 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:36.328426 | orchestrator |
2026-01-05 01:11:36.328432 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-01-05 01:11:36.328438 | orchestrator | Monday 05 January 2026 01:10:21 +0000 (0:00:00.129) 0:08:51.472 ********
2026-01-05 01:11:36.328445 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:36.328451 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:36.328457 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.328463 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.328470 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.328477 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-01-05 01:11:36.328483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:11:36.328489 | orchestrator |
2026-01-05 01:11:36.328495 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-01-05 01:11:36.328501 | orchestrator | Monday 05 January 2026 01:10:44 +0000 (0:00:23.339) 0:09:14.812 ********
2026-01-05 01:11:36.328508 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.328514 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:36.328520 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:36.328527 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:36.328533 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.328539 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.328545 | orchestrator |
2026-01-05 01:11:36.328551 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-01-05 01:11:36.328557 | orchestrator | Monday 05 January 2026 01:10:55 +0000 (0:00:10.629) 0:09:25.442 ********
2026-01-05 01:11:36.328563 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:36.328569 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:36.328576 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.328582 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.328588 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.328595 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-01-05 01:11:36.328600 | orchestrator |
2026-01-05 01:11:36.328606 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-05 01:11:36.328613 | orchestrator | Monday 05 January 2026 01:10:59 +0000 (0:00:04.067) 0:09:29.510 ********
2026-01-05 01:11:36.328620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:11:36.328626 | orchestrator |
2026-01-05 01:11:36.328632 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-05 01:11:36.328638 | orchestrator | Monday 05 January 2026 01:11:12 +0000 (0:00:13.077) 0:09:42.587 ********
2026-01-05 01:11:36.328644 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:11:36.328650 | orchestrator |
2026-01-05 01:11:36.328656 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-01-05 01:11:36.328662 | orchestrator | Monday 05 January 2026 01:11:14 +0000 (0:00:01.416) 0:09:44.004 ********
2026-01-05 01:11:36.328669 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:36.328674 | orchestrator |
2026-01-05 01:11:36.328681 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-01-05 01:11:36.328687 | orchestrator | Monday 05 January 2026 01:11:15 +0000 (0:00:01.365) 0:09:45.369 ********
2026-01-05 01:11:36.328694 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:11:36.328700 | orchestrator |
2026-01-05 01:11:36.328706 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-01-05 01:11:36.328719 | orchestrator | Monday 05 January 2026 01:11:27 +0000 (0:00:11.949) 0:09:57.318 ********
2026-01-05 01:11:36.328725 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:11:36.328731 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:11:36.328738 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:11:36.328744 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:36.328750 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:36.328757 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:36.328763 | orchestrator |
2026-01-05 01:11:36.328769 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-01-05 01:11:36.328775 | orchestrator |
2026-01-05 01:11:36.328781 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-01-05 01:11:36.328788 | orchestrator | Monday 05 January 2026 01:11:29 +0000 (0:00:01.772) 0:09:59.091 ********
2026-01-05 01:11:36.328794 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:36.328801 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:11:36.328807 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:11:36.328813 | orchestrator |
2026-01-05 01:11:36.328819 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-01-05 01:11:36.328825 | orchestrator |
2026-01-05 01:11:36.328832 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-01-05 01:11:36.328838 | orchestrator | Monday 05 January 2026 01:11:30 +0000 (0:00:01.188) 0:10:00.279 ********
2026-01-05 01:11:36.328845 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.328851 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.328858 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.328882 | orchestrator |
2026-01-05 01:11:36.328894 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-01-05 01:11:36.328901 | orchestrator |
2026-01-05 01:11:36.328907 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-01-05 01:11:36.328914 | orchestrator | Monday 05 January 2026 01:11:30 +0000 (0:00:00.534) 0:10:00.814 ********
2026-01-05 01:11:36.328920 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-01-05 01:11:36.328926 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-05 01:11:36.328932 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-05 01:11:36.328938 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-01-05 01:11:36.328947 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-05 01:11:36.328963 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.328969 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-05 01:11:36.328975 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-05 01:11:36.328981 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-05 01:11:36.328987 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-05 01:11:36.328992 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-05 01:11:36.328998 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.329004 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:36.329009 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-05 01:11:36.329016 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-05 01:11:36.329022 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-05 01:11:36.329027 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-05 01:11:36.329032 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-05 01:11:36.329039 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.329045 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:36.329051 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-05 01:11:36.329057 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-05 01:11:36.329063 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-05 01:11:36.329082 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-05 01:11:36.329089 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-05 01:11:36.329093 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.329097 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:36.329101 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-05 01:11:36.329104 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-05 01:11:36.329108 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-05 01:11:36.329112 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-05 01:11:36.329115 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-05 01:11:36.329119 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.329123 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.329127 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.329130 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-05 01:11:36.329134 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-05 01:11:36.329138 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-05 01:11:36.329143 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-05 01:11:36.329149 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-05 01:11:36.329155 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-05 01:11:36.329161 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.329167 | orchestrator |
2026-01-05 01:11:36.329173 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-05 01:11:36.329179 | orchestrator |
2026-01-05 01:11:36.329186 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-05 01:11:36.329192 | orchestrator | Monday 05 January 2026 01:11:32 +0000 (0:00:01.425) 0:10:02.240 ********
2026-01-05 01:11:36.329197 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-05 01:11:36.329204 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-05 01:11:36.329210 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.329217 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-05 01:11:36.329223 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-05 01:11:36.329229 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.329237 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-05 01:11:36.329241 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-05 01:11:36.329245 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.329248 | orchestrator |
2026-01-05 01:11:36.329252 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-05 01:11:36.329256 | orchestrator |
2026-01-05 01:11:36.329260 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-05 01:11:36.329263 | orchestrator | Monday 05 January 2026 01:11:33 +0000 (0:00:00.762) 0:10:03.002 ********
2026-01-05 01:11:36.329267 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.329271 | orchestrator |
2026-01-05 01:11:36.329274 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-05 01:11:36.329278 | orchestrator |
2026-01-05 01:11:36.329282 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-05 01:11:36.329286 | orchestrator | Monday 05 January 2026 01:11:33 +0000 (0:00:00.735) 0:10:03.738 ********
2026-01-05 01:11:36.329293 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:36.329297 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:36.329301 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:36.329304 | orchestrator |
2026-01-05 01:11:36.329308 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:36.329316 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:36.329321 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-01-05 01:11:36.329329 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-05 01:11:36.329333 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-05 01:11:36.329337 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-05 01:11:36.329341 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-05 01:11:36.329345 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-05 01:11:36.329348 | orchestrator |
2026-01-05 01:11:36.329352 | orchestrator |
2026-01-05 01:11:36.329356 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:36.329360 | orchestrator | Monday 05 January 2026 01:11:34 +0000 (0:00:00.469) 0:10:04.208 ********
2026-01-05 01:11:36.329364 | orchestrator | ===============================================================================
2026-01-05 01:11:36.329368 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 53.43s
2026-01-05 01:11:36.329371 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.77s
2026-01-05 01:11:36.329375 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.70s
2026-01-05 01:11:36.329379 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.50s
2026-01-05 01:11:36.329382 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.34s
2026-01-05 01:11:36.329386 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.74s
2026-01-05 01:11:36.329390 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.44s
2026-01-05 01:11:36.329393 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.41s
2026-01-05 01:11:36.329397 | orchestrator | nova : Restart nova-api container -------------------------------------- 15.42s
2026-01-05 01:11:36.329401 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.72s
2026-01-05 01:11:36.329405 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.31s
2026-01-05 01:11:36.329408 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.33s
2026-01-05 01:11:36.329412 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.09s
2026-01-05 01:11:36.329416 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.08s
2026-01-05 01:11:36.329419 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.87s
2026-01-05 01:11:36.329423 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.61s
2026-01-05 01:11:36.329427 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.95s
2026-01-05 01:11:36.329430 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 11.84s
2026-01-05 01:11:36.329434 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.63s
2026-01-05 01:11:36.329438 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.14s
2026-01-05 01:11:36.329442 | orchestrator | 2026-01-05 01:11:36 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:36.329446 | orchestrator | 2026-01-05 01:11:36 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:39.366350 | orchestrator | 2026-01-05 01:11:39 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:39.369336 | orchestrator | 2026-01-05 01:11:39 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:39.369462 | orchestrator | 2026-01-05 01:11:39 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:42.416148 | orchestrator | 2026-01-05 01:11:42 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:42.418493 | orchestrator | 2026-01-05 01:11:42 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:42.418549 | orchestrator | 2026-01-05 01:11:42 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:45.464395 | orchestrator | 2026-01-05 01:11:45 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state STARTED
2026-01-05 01:11:45.466992 | orchestrator | 2026-01-05 01:11:45 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED
2026-01-05 01:11:45.467351 | orchestrator | 2026-01-05 01:11:45 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:48.513807 | orchestrator | 2026-01-05 01:11:48 | INFO  | Task a58d152d-c94c-42a3-a1e1-ef379993e021 is in state SUCCESS
2026-01-05 01:11:48.514961 | orchestrator |
2026-01-05 01:11:48.515019 | orchestrator |
2026-01-05 01:11:48.515026 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:48.515032 | orchestrator |
2026-01-05 01:11:48.515038 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:48.515043
| orchestrator | Monday 05 January 2026 01:09:11 +0000 (0:00:00.216) 0:00:00.216 ******** 2026-01-05 01:11:48.515048 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:48.515054 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:11:48.515059 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:11:48.515063 | orchestrator | 2026-01-05 01:11:48.515067 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:11:48.515071 | orchestrator | Monday 05 January 2026 01:09:11 +0000 (0:00:00.290) 0:00:00.507 ******** 2026-01-05 01:11:48.515075 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-05 01:11:48.515080 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-05 01:11:48.515083 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-05 01:11:48.515087 | orchestrator | 2026-01-05 01:11:48.515091 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-05 01:11:48.515095 | orchestrator | 2026-01-05 01:11:48.515098 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 01:11:48.515102 | orchestrator | Monday 05 January 2026 01:09:11 +0000 (0:00:00.420) 0:00:00.927 ******** 2026-01-05 01:11:48.515106 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:11:48.515111 | orchestrator | 2026-01-05 01:11:48.515115 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-05 01:11:48.515118 | orchestrator | Monday 05 January 2026 01:09:12 +0000 (0:00:00.590) 0:00:01.518 ******** 2026-01-05 01:11:48.515125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515194 | orchestrator | 2026-01-05 01:11:48.515198 | orchestrator | TASK 
[grafana : Check if extra configuration file exists] ********************** 2026-01-05 01:11:48.515202 | orchestrator | Monday 05 January 2026 01:09:13 +0000 (0:00:00.898) 0:00:02.417 ******** 2026-01-05 01:11:48.515222 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-05 01:11:48.515226 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-05 01:11:48.515230 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:11:48.515234 | orchestrator | 2026-01-05 01:11:48.515247 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 01:11:48.515252 | orchestrator | Monday 05 January 2026 01:09:14 +0000 (0:00:00.897) 0:00:03.314 ******** 2026-01-05 01:11:48.515256 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:11:48.515277 | orchestrator | 2026-01-05 01:11:48.515282 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-05 01:11:48.515287 | orchestrator | Monday 05 January 2026 01:09:14 +0000 (0:00:00.740) 0:00:04.055 ******** 2026-01-05 01:11:48.515300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515305 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515317 | orchestrator | 2026-01-05 01:11:48.515321 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-05 01:11:48.515325 | orchestrator | Monday 05 January 2026 01:09:16 +0000 (0:00:01.585) 0:00:05.640 ******** 2026-01-05 01:11:48.515329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 01:11:48.515333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 01:11:48.515337 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.515341 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.515354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  
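Editor's note on the skipped tasks above: the "Copying over backend internal TLS certificate" items are skipped on all three testbed nodes, which is consistent with backend TLS being disabled in this testbed deployment. In kolla-ansible, service-cert-copy tasks of this kind are typically gated by a boolean toggle; a minimal hedged sketch of such a guard is shown below (variable and file names here are assumptions for illustration, not the exact task from this run):

```yaml
# Hypothetical sketch of a conditionally-skipped cert-copy task.
# The `when:` guard is what produces the "skipping:" lines in the log
# whenever backend TLS is disabled in globals.yml.
- name: "{{ service_name }} | Copying over backend internal TLS certificate"
  copy:
    src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"   # assumed path
    dest: "/etc/kolla/{{ service_name }}/{{ service_name }}-cert.pem"
    mode: "0600"
  when: kolla_enable_tls_backend | bool   # false in this testbed, hence the skips
```

The subsequent "Copying over backend internal TLS key" task is skipped for the same reason, so the deploy proceeds directly to templating the grafana config files.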
2026-01-05 01:11:48.515374 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.515378 | orchestrator | 2026-01-05 01:11:48.515382 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-05 01:11:48.515386 | orchestrator | Monday 05 January 2026 01:09:16 +0000 (0:00:00.392) 0:00:06.033 ******** 2026-01-05 01:11:48.515390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 01:11:48.515398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 01:11:48.515403 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.515410 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.515417 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 01:11:48.515424 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.515431 | orchestrator | 2026-01-05 01:11:48.515440 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-05 01:11:48.515500 | orchestrator | Monday 05 January 2026 01:09:17 +0000 (0:00:00.827) 0:00:06.861 ******** 2026-01-05 01:11:48.515508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515545 | orchestrator | 2026-01-05 01:11:48.515552 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-05 01:11:48.515558 | orchestrator | Monday 05 January 2026 01:09:18 +0000 (0:00:01.260) 0:00:08.121 ******** 2026-01-05 01:11:48.515564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:48.515583 | orchestrator | 2026-01-05 01:11:48.515589 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-05 01:11:48.515595 | orchestrator | Monday 05 January 2026 01:09:20 +0000 (0:00:01.387) 0:00:09.509 ******** 2026-01-05 01:11:48.515601 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
01:11:48.515606 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.515613 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.515618 | orchestrator | 2026-01-05 01:11:48.515624 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-05 01:11:48.515631 | orchestrator | Monday 05 January 2026 01:09:20 +0000 (0:00:00.547) 0:00:10.056 ******** 2026-01-05 01:11:48.515637 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 01:11:48.515643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 01:11:48.515649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 01:11:48.515654 | orchestrator | 2026-01-05 01:11:48.515664 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-05 01:11:48.515671 | orchestrator | Monday 05 January 2026 01:09:22 +0000 (0:00:01.272) 0:00:11.329 ******** 2026-01-05 01:11:48.515677 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 01:11:48.515686 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 01:11:48.515697 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 01:11:48.515705 | orchestrator | 2026-01-05 01:11:48.515710 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-05 01:11:48.515716 | orchestrator | Monday 05 January 2026 01:09:23 +0000 (0:00:01.285) 0:00:12.615 ******** 2026-01-05 01:11:48.515722 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:11:48.515728 | orchestrator | 2026-01-05 
01:11:48.515734 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-05 01:11:48.515740 | orchestrator | Monday 05 January 2026 01:09:24 +0000 (0:00:00.801) 0:00:13.416 ******** 2026-01-05 01:11:48.515747 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-05 01:11:48.515752 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-05 01:11:48.515758 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:48.515765 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:11:48.515770 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:11:48.515777 | orchestrator | 2026-01-05 01:11:48.515783 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-05 01:11:48.515789 | orchestrator | Monday 05 January 2026 01:09:24 +0000 (0:00:00.687) 0:00:14.103 ******** 2026-01-05 01:11:48.515795 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.515802 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.515808 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.515814 | orchestrator | 2026-01-05 01:11:48.515820 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-05 01:11:48.515826 | orchestrator | Monday 05 January 2026 01:09:25 +0000 (0:00:00.681) 0:00:14.785 ******** 2026-01-05 01:11:48.515833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098262, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.2927754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098262, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.2927754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098262, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.2927754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098355, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3283842, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098355, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3283842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098355, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3283842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098291, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3070261, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098291, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3070261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098291, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3070261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098358, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1767572178.331086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098358, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.331086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098358, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.331086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098300, 'dev': 113, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3125498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098300, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3125498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098300, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3125498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.515978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 
'inode': 1098313, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3248727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:48.516 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => the following loop items were applied identically on all three nodes; each dashboard file was installed under /operations/grafana/dashboards with mode 0644, owner root:root (uid 0, gid 0):
2026-01-05 01:11:48.516 | orchestrator |   ceph/radosgw-overview.json (size 39556)
2026-01-05 01:11:48.516 | orchestrator |   ceph/README.md (size 84)
2026-01-05 01:11:48.516 | orchestrator |   ceph/ceph-cluster.json (size 34113)
2026-01-05 01:11:48.516 | orchestrator |   ceph/cephfs-overview.json (size 9025)
2026-01-05 01:11:48.516 | orchestrator |   ceph/pool-detail.json (size 19609)
2026-01-05 01:11:48.516 | orchestrator |   ceph/rbd-details.json (size 12997)
2026-01-05 01:11:48.516 | orchestrator |   ceph/ceph_overview.json (size 80386)
2026-01-05 01:11:48.516 | orchestrator |   ceph/radosgw-detail.json (size 19695)
2026-01-05 01:11:48.516 | orchestrator |   ceph/osds-overview.json (size 38432)
2026-01-05 01:11:48.516 | orchestrator |   ceph/multi-cluster-overview.json (size 62676)
2026-01-05 01:11:48.516 | orchestrator |   ceph/hosts-overview.json (size 27218)
2026-01-05 01:11:48.516 | orchestrator |   ceph/pool-overview.json (size 49139)
2026-01-05 01:11:48.516 | orchestrator |   ceph/host-details.json (size 44791)
2026-01-05 01:11:48.516 | orchestrator |   ceph/radosgw-sync-overview.json (size 16156)
2026-01-05 01:11:48.516 | orchestrator |   openstack/openstack.json (size 57270)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/haproxy.json (size 410814)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/database.json (size 30898)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/node-rsrc-use.json (size 15725)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/alertmanager-overview.json (size 9645)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/opensearch.json (size 65458)
2026-01-05 01:11:48.516 | orchestrator |   infrastructure/node_exporter_full.json (size 682774)
2026-01-05 01:11:48.516686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json',
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098391, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3747811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098422, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3792715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098422, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3792715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-01-05 01:11:48.516707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098422, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3792715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098437, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3841386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098437, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3841386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098437, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3841386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098415, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3768132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098415, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3768132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098415, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3768132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098387, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098387, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098387, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098373, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3400266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098373, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1767572178.3400266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098373, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3400266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098386, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098386, 'dev': 
113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098386, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3550267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098369, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3370266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098369, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3370266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098369, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3370266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098389, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3568523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098389, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3568523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098389, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3568523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098432, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3834713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516912 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098432, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3834713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098432, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3834713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098429, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3816085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098429, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3816085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098429, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3816085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098366, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.332011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098366, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.332011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098366, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.332011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.516993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098367, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1767572178.3330264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098367, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3330264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098367, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3330264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 
1098413, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.375198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098413, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.375198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098413, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.375198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:48.517150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098425, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3799403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:48.517178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098425, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3799403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:48.517195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098425, 'dev': 113, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572178.3799403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:48.517210 | orchestrator |
2026-01-05 01:11:48.517227 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-01-05 01:11:48.517245 | orchestrator | Monday 05 January 2026 01:10:02 +0000 (0:00:36.984) 0:00:51.770 ********
2026-01-05 01:11:48.517262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:48.517277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:48.517304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:48.517323 | orchestrator |
2026-01-05 01:11:48.517341 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-05 01:11:48.517372 | orchestrator | Monday 05 January 2026 01:10:03 +0000 (0:00:01.133) 0:00:52.904 ********
2026-01-05 01:11:48.517387 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:48.517400 | orchestrator |
2026-01-05 01:11:48.517410 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-05 01:11:48.517487 | orchestrator | Monday 05 January 2026 01:10:05 +0000 (0:00:02.203) 0:00:55.107 ********
2026-01-05 01:11:48.517500 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:11:48.517513 | orchestrator |
2026-01-05 01:11:48.517524 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:11:48.517536 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.072) 0:00:57.485 ********
2026-01-05 01:11:48.517547 | orchestrator |
2026-01-05 01:11:48.517559 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:11:48.517571 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.067) 0:00:57.557 ********
2026-01-05 01:11:48.517584 | orchestrator |
2026-01-05 01:11:48.517596 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 01:11:48.517608 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.067) 0:00:57.625 ********
2026-01-05 01:11:48.517621 | orchestrator |
2026-01-05 01:11:48.517633 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container]
******************** 2026-01-05 01:11:48.517644 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.289) 0:00:57.915 ******** 2026-01-05 01:11:48.517656 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.517667 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.517678 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:48.517690 | orchestrator | 2026-01-05 01:11:48.517701 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-05 01:11:48.517713 | orchestrator | Monday 05 January 2026 01:10:15 +0000 (0:00:06.866) 0:01:04.781 ******** 2026-01-05 01:11:48.517725 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.517735 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.517747 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-05 01:11:48.517760 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-05 01:11:48.517772 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-05 01:11:48.517784 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-01-05 01:11:48.517796 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:48.517808 | orchestrator | 2026-01-05 01:11:48.517815 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-05 01:11:48.517821 | orchestrator | Monday 05 January 2026 01:11:05 +0000 (0:00:50.402) 0:01:55.184 ******** 2026-01-05 01:11:48.517827 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.517832 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:11:48.517838 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:11:48.517869 | orchestrator | 2026-01-05 01:11:48.517877 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-05 01:11:48.517883 | orchestrator | Monday 05 January 2026 01:11:40 +0000 (0:00:34.085) 0:02:29.269 ******** 2026-01-05 01:11:48.517889 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:48.517894 | orchestrator | 2026-01-05 01:11:48.517900 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-05 01:11:48.517906 | orchestrator | Monday 05 January 2026 01:11:42 +0000 (0:00:02.257) 0:02:31.527 ******** 2026-01-05 01:11:48.517911 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.517917 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:48.517923 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:48.517929 | orchestrator | 2026-01-05 01:11:48.517935 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-05 01:11:48.517941 | orchestrator | Monday 05 January 2026 01:11:42 +0000 (0:00:00.505) 0:02:32.032 ******** 2026-01-05 01:11:48.517957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-01-05 01:11:48.517966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-05 01:11:48.517973 | orchestrator | 2026-01-05 01:11:48.517980 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-05 01:11:48.517986 | orchestrator | Monday 05 January 2026 01:11:45 +0000 (0:00:02.407) 0:02:34.439 ******** 2026-01-05 01:11:48.517992 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:48.517997 | orchestrator | 2026-01-05 01:11:48.518003 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:11:48.518012 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:48.518066 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:48.518070 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:48.518074 | orchestrator | 2026-01-05 01:11:48.518078 | orchestrator | 2026-01-05 01:11:48.518082 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:11:48.518086 | orchestrator | Monday 05 January 2026 01:11:45 +0000 (0:00:00.282) 0:02:34.722 ******** 2026-01-05 01:11:48.518098 | orchestrator | =============================================================================== 2026-01-05 01:11:48.518102 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.40s 2026-01-05 01:11:48.518105 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.98s 2026-01-05 01:11:48.518109 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.09s 2026-01-05 01:11:48.518113 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.87s 2026-01-05 01:11:48.518116 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.41s 2026-01-05 01:11:48.518120 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.38s 2026-01-05 01:11:48.518124 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-01-05 01:11:48.518127 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s 2026-01-05 01:11:48.518131 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.59s 2026-01-05 01:11:48.518135 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.39s 2026-01-05 01:11:48.518138 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.29s 2026-01-05 01:11:48.518144 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2026-01-05 01:11:48.518150 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s 2026-01-05 01:11:48.518156 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.13s 2026-01-05 01:11:48.518163 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s 2026-01-05 01:11:48.518169 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s 2026-01-05 01:11:48.518175 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.83s 2026-01-05 01:11:48.518181 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.80s 2026-01-05 01:11:48.518194 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s 2026-01-05 01:11:48.518200 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2026-01-05 01:11:48.518452 | orchestrator | 2026-01-05 01:11:48 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:11:48.518544 | orchestrator | 2026-01-05 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:15:40.025508 | orchestrator | 2026-01-05 01:15:40 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state STARTED 2026-01-05 01:15:40.025610 | orchestrator | 2026-01-05 01:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:15:43.070164 | orchestrator | 2026-01-05 01:15:43 | INFO  | Task 01acc458-fe1c-499d-83dc-6b0f2bc066fc is in state SUCCESS 2026-01-05 01:15:43.071207 | orchestrator | 2026-01-05 01:15:43.071240 | orchestrator | 2026-01-05 01:15:43.071246 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:15:43.071252 | orchestrator | 2026-01-05 01:15:43.071257 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:15:43.071262 | orchestrator | Monday 05 January 2026 
01:11:01 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-01-05 01:15:43.071266 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.071272 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:43.071277 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:43.071282 | orchestrator | 2026-01-05 01:15:43.071287 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:15:43.071292 | orchestrator | Monday 05 January 2026 01:11:01 +0000 (0:00:00.279) 0:00:00.515 ******** 2026-01-05 01:15:43.071297 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-05 01:15:43.071302 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-05 01:15:43.071306 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-05 01:15:43.071311 | orchestrator | 2026-01-05 01:15:43.071315 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-05 01:15:43.071320 | orchestrator | 2026-01-05 01:15:43.071324 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.071328 | orchestrator | Monday 05 January 2026 01:11:01 +0000 (0:00:00.486) 0:00:01.002 ******** 2026-01-05 01:15:43.071333 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:15:43.071339 | orchestrator | 2026-01-05 01:15:43.071343 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-01-05 01:15:43.071347 | orchestrator | Monday 05 January 2026 01:11:02 +0000 (0:00:00.614) 0:00:01.616 ******** 2026-01-05 01:15:43.071353 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-05 01:15:43.071357 | orchestrator | 2026-01-05 01:15:43.071361 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-01-05 
01:15:43.071366 | orchestrator | Monday 05 January 2026 01:11:05 +0000 (0:00:03.241) 0:00:04.858 ******** 2026-01-05 01:15:43.071370 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-05 01:15:43.071375 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-01-05 01:15:43.071380 | orchestrator | 2026-01-05 01:15:43.071397 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-05 01:15:43.071401 | orchestrator | Monday 05 January 2026 01:11:12 +0000 (0:00:06.490) 0:00:11.348 ******** 2026-01-05 01:15:43.071406 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 01:15:43.071410 | orchestrator | 2026-01-05 01:15:43.071415 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-05 01:15:43.071419 | orchestrator | Monday 05 January 2026 01:11:15 +0000 (0:00:03.281) 0:00:14.629 ******** 2026-01-05 01:15:43.071439 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:15:43.071444 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-05 01:15:43.071449 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-05 01:15:43.071453 | orchestrator | 2026-01-05 01:15:43.071457 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-05 01:15:43.071462 | orchestrator | Monday 05 January 2026 01:11:23 +0000 (0:00:08.022) 0:00:22.652 ******** 2026-01-05 01:15:43.071466 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:15:43.071471 | orchestrator | 2026-01-05 01:15:43.071475 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-01-05 01:15:43.071479 | orchestrator | Monday 05 January 2026 01:11:27 +0000 (0:00:03.446) 0:00:26.099 ******** 
2026-01-05 01:15:43.071484 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-05 01:15:43.071488 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-05 01:15:43.071492 | orchestrator | 2026-01-05 01:15:43.071497 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-01-05 01:15:43.071501 | orchestrator | Monday 05 January 2026 01:11:34 +0000 (0:00:07.393) 0:00:33.492 ******** 2026-01-05 01:15:43.071506 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-01-05 01:15:43.071510 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-01-05 01:15:43.071514 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-01-05 01:15:43.071519 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-01-05 01:15:43.071523 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-01-05 01:15:43.071527 | orchestrator | 2026-01-05 01:15:43.071532 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.071536 | orchestrator | Monday 05 January 2026 01:11:49 +0000 (0:00:15.519) 0:00:49.012 ******** 2026-01-05 01:15:43.071540 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:15:43.071583 | orchestrator | 2026-01-05 01:15:43.071589 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-01-05 01:15:43.071593 | orchestrator | Monday 05 January 2026 01:11:50 +0000 (0:00:00.592) 0:00:49.604 ******** 2026-01-05 01:15:43.071598 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071602 | orchestrator | 2026-01-05 01:15:43.071607 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-01-05 
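
The "Adding octavia related roles" step creates the Keystone RBAC roles that Octavia's API policies reference. A sketch of how such a loop could look with the `openstack.cloud` collection (the `cloud: admin` entry is an assumption; the role names are taken verbatim from the log output):

```yaml
# Hedged sketch, not the literal kolla-ansible task: ensure the
# load-balancer RBAC roles exist in Keystone, once per deployment.
- name: Adding octavia related roles
  openstack.cloud.identity_role:
    cloud: admin            # assumed clouds.yaml entry
    name: "{{ item }}"
    state: present
  loop:
    - load-balancer_observer
    - load-balancer_global_observer
    - load-balancer_member
    - load-balancer_admin
    - load-balancer_quota_admin
  run_once: true
```
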
01:15:43.071611 | orchestrator | Monday 05 January 2026 01:11:55 +0000 (0:00:04.982) 0:00:54.587 ******** 2026-01-05 01:15:43.071615 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071620 | orchestrator | 2026-01-05 01:15:43.071624 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-05 01:15:43.071636 | orchestrator | Monday 05 January 2026 01:11:59 +0000 (0:00:03.834) 0:00:58.422 ******** 2026-01-05 01:15:43.071641 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.071645 | orchestrator | 2026-01-05 01:15:43.071650 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-01-05 01:15:43.071654 | orchestrator | Monday 05 January 2026 01:12:02 +0000 (0:00:03.174) 0:01:01.597 ******** 2026-01-05 01:15:43.071659 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-05 01:15:43.071663 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-05 01:15:43.071667 | orchestrator | 2026-01-05 01:15:43.071672 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-01-05 01:15:43.071676 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:09.870) 0:01:11.468 ******** 2026-01-05 01:15:43.071681 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-01-05 01:15:43.071685 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-01-05 01:15:43.071696 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-01-05 01:15:43.071701 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': 
'5555'}]) 2026-01-05 01:15:43.071706 | orchestrator | 2026-01-05 01:15:43.071710 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-01-05 01:15:43.071715 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:15.565) 0:01:27.033 ******** 2026-01-05 01:15:43.071719 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071723 | orchestrator | 2026-01-05 01:15:43.071728 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-01-05 01:15:43.071732 | orchestrator | Monday 05 January 2026 01:12:32 +0000 (0:00:04.158) 0:01:31.192 ******** 2026-01-05 01:15:43.071736 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071741 | orchestrator | 2026-01-05 01:15:43.071745 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-01-05 01:15:43.071750 | orchestrator | Monday 05 January 2026 01:12:37 +0000 (0:00:05.406) 0:01:36.598 ******** 2026-01-05 01:15:43.071754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.071758 | orchestrator | 2026-01-05 01:15:43.071766 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-01-05 01:15:43.071771 | orchestrator | Monday 05 January 2026 01:12:37 +0000 (0:00:00.219) 0:01:36.817 ******** 2026-01-05 01:15:43.071775 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.071779 | orchestrator | 2026-01-05 01:15:43.071784 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.071788 | orchestrator | Monday 05 January 2026 01:12:42 +0000 (0:00:04.258) 0:01:41.075 ******** 2026-01-05 01:15:43.071793 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:15:43.071797 | orchestrator | 2026-01-05 01:15:43.071811 | orchestrator | TASK [octavia : Create ports for Octavia 
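
The management network and subnet created above carry the control traffic between the Octavia services and the amphora VMs. A sketch of the two tasks with the `openstack.cloud` collection (network name and CIDR are assumptions; the log does not show the actual values):

```yaml
# Hedged sketch of the lb-mgmt network/subnet creation.
- name: Create loadbalancer management network
  openstack.cloud.network:
    cloud: admin            # assumed clouds.yaml entry
    name: lb-mgmt-net       # assumed name
    state: present

- name: Create loadbalancer management subnet
  openstack.cloud.subnet:
    cloud: admin
    network_name: lb-mgmt-net
    name: lb-mgmt-subnet    # assumed name
    cidr: 10.1.0.0/24       # assumed; not shown in the log
    state: present
```
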
health-manager nodes] ***************** 2026-01-05 01:15:43.071816 | orchestrator | Monday 05 January 2026 01:12:43 +0000 (0:00:01.073) 0:01:42.149 ******** 2026-01-05 01:15:43.071822 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071827 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.071832 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.071838 | orchestrator | 2026-01-05 01:15:43.071843 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-01-05 01:15:43.071848 | orchestrator | Monday 05 January 2026 01:12:48 +0000 (0:00:05.686) 0:01:47.835 ******** 2026-01-05 01:15:43.071853 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.071858 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.071863 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071868 | orchestrator | 2026-01-05 01:15:43.071873 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-01-05 01:15:43.071879 | orchestrator | Monday 05 January 2026 01:12:53 +0000 (0:00:04.882) 0:01:52.717 ******** 2026-01-05 01:15:43.071884 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071889 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.071894 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.071899 | orchestrator | 2026-01-05 01:15:43.071904 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-01-05 01:15:43.071909 | orchestrator | Monday 05 January 2026 01:12:54 +0000 (0:00:00.804) 0:01:53.522 ******** 2026-01-05 01:15:43.071914 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:43.071919 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.071925 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:43.071930 | orchestrator | 2026-01-05 01:15:43.071935 | orchestrator | TASK [octavia : Create octavia dhclient conf] 
********************************** 2026-01-05 01:15:43.071940 | orchestrator | Monday 05 January 2026 01:12:56 +0000 (0:00:01.951) 0:01:55.473 ******** 2026-01-05 01:15:43.071948 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.071953 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.071959 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.071965 | orchestrator | 2026-01-05 01:15:43.071972 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-01-05 01:15:43.071981 | orchestrator | Monday 05 January 2026 01:12:57 +0000 (0:00:01.278) 0:01:56.752 ******** 2026-01-05 01:15:43.071993 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.072000 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.072007 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.072014 | orchestrator | 2026-01-05 01:15:43.072021 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-01-05 01:15:43.072028 | orchestrator | Monday 05 January 2026 01:12:58 +0000 (0:00:01.189) 0:01:57.942 ******** 2026-01-05 01:15:43.072036 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.072042 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.072049 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.072056 | orchestrator | 2026-01-05 01:15:43.072069 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-01-05 01:15:43.072077 | orchestrator | Monday 05 January 2026 01:13:00 +0000 (0:00:01.985) 0:01:59.927 ******** 2026-01-05 01:15:43.072084 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.072091 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.072098 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.072106 | orchestrator | 2026-01-05 01:15:43.072113 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] 
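
The "Add Octavia port to openvswitch br-int" task plugs the health-manager Neutron port into the hypervisor's integration bridge as an internal interface (conventionally `ohm0`), which the dhclient configuration and `octavia-interface` service created above then bring up. A rough sketch of that wiring (variable names for the port ID and MAC are hypothetical):

```yaml
# Hedged sketch: bind the pre-created Neutron port to an OVS internal
# interface so the host can reach the lb-mgmt network directly.
- name: Add Octavia port to openvswitch br-int
  ansible.builtin.command: >
    ovs-vsctl --may-exist add-port br-int ohm0
    -- set Interface ohm0 type=internal
    -- set Interface ohm0 external-ids:iface-id={{ hm_port_id }}
    -- set Interface ohm0 external-ids:iface-status=active
    -- set Interface ohm0 external-ids:attached-mac={{ hm_port_mac }}
  become: true
```
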
***************************** 2026-01-05 01:15:43.072120 | orchestrator | Monday 05 January 2026 01:13:02 +0000 (0:00:01.711) 0:02:01.639 ******** 2026-01-05 01:15:43.072128 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072135 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:43.072142 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:43.072149 | orchestrator | 2026-01-05 01:15:43.072156 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-01-05 01:15:43.072163 | orchestrator | Monday 05 January 2026 01:13:03 +0000 (0:00:00.620) 0:02:02.260 ******** 2026-01-05 01:15:43.072171 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:43.072178 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:43.072185 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072193 | orchestrator | 2026-01-05 01:15:43.072200 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.072207 | orchestrator | Monday 05 January 2026 01:13:06 +0000 (0:00:03.652) 0:02:05.912 ******** 2026-01-05 01:15:43.072215 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:15:43.072223 | orchestrator | 2026-01-05 01:15:43.072230 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-05 01:15:43.072238 | orchestrator | Monday 05 January 2026 01:13:07 +0000 (0:00:00.735) 0:02:06.648 ******** 2026-01-05 01:15:43.072244 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072251 | orchestrator | 2026-01-05 01:15:43.072258 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-05 01:15:43.072266 | orchestrator | Monday 05 January 2026 01:13:11 +0000 (0:00:03.991) 0:02:10.640 ******** 2026-01-05 01:15:43.072273 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072278 | 
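
The "Wait for interface ohm0 ip appear" task gates the rest of the play on DHCP having assigned an address to the management interface. One way to express such a wait (a sketch under the assumption that polling `ip addr` is acceptable; retry counts are illustrative):

```yaml
# Hedged sketch: poll until ohm0 has an IPv4 address from the
# lb-mgmt subnet before continuing.
- name: Wait for interface ohm0 ip appear
  ansible.builtin.command: ip -4 addr show dev ohm0
  register: ohm0_addr
  until: "'inet' in ohm0_addr.stdout"
  retries: 12               # illustrative values
  delay: 5
  changed_when: false
```
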
orchestrator | 2026-01-05 01:15:43.072286 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-05 01:15:43.072293 | orchestrator | Monday 05 January 2026 01:13:14 +0000 (0:00:03.176) 0:02:13.817 ******** 2026-01-05 01:15:43.072415 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-05 01:15:43.072433 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-05 01:15:43.072441 | orchestrator | 2026-01-05 01:15:43.072449 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-01-05 01:15:43.072465 | orchestrator | Monday 05 January 2026 01:13:21 +0000 (0:00:06.653) 0:02:20.470 ******** 2026-01-05 01:15:43.072472 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072480 | orchestrator | 2026-01-05 01:15:43.072487 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-01-05 01:15:43.072494 | orchestrator | Monday 05 January 2026 01:13:24 +0000 (0:00:03.230) 0:02:23.701 ******** 2026-01-05 01:15:43.072501 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:43.072509 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:43.072515 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:43.072522 | orchestrator | 2026-01-05 01:15:43.072530 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-01-05 01:15:43.072537 | orchestrator | Monday 05 January 2026 01:13:24 +0000 (0:00:00.340) 0:02:24.042 ******** 2026-01-05 01:15:43.072565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.072584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.072592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.072641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.072662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.072671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.072679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.072773 | orchestrator | 2026-01-05 01:15:43.072781 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-05 01:15:43.072789 | orchestrator | Monday 05 January 2026 01:13:27 +0000 (0:00:02.444) 0:02:26.487 ******** 2026-01-05 01:15:43.072796 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.072804 | orchestrator | 2026-01-05 01:15:43.072811 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-05 01:15:43.072819 | orchestrator | Monday 05 January 2026 01:13:27 +0000 (0:00:00.138) 0:02:26.625 ******** 2026-01-05 01:15:43.072826 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.072833 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:43.072840 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:43.072847 | orchestrator | 2026-01-05 01:15:43.072854 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-05 01:15:43.072861 | orchestrator | Monday 05 January 2026 01:13:28 +0000 (0:00:00.503) 0:02:27.128 ******** 2026-01-05 01:15:43.072876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.072888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.072895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.072903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.072911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.072918 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.072930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.072944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.072961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.072969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.072976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.072984 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:43.072997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073042 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:43.073050 | orchestrator | 2026-01-05 01:15:43.073056 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.073063 | orchestrator | Monday 05 January 2026 01:13:28 +0000 (0:00:00.689) 0:02:27.818 ******** 2026-01-05 01:15:43.073070 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:15:43.073078 | orchestrator | 2026-01-05 01:15:43.073084 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-05 01:15:43.073091 | orchestrator | Monday 05 January 2026 01:13:29 +0000 (0:00:00.545) 0:02:28.363 ******** 2026-01-05 01:15:43.073098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.073115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 01:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-05 01:15:43.073351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.073359 | orchestrator | changed: [testbed-node-1] => (item={'key':
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.073367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.073375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.073383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.073477 | orchestrator | 2026-01-05 01:15:43.073484 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-05 01:15:43.073491 | orchestrator | Monday 05 January 2026 01:13:34 +0000 (0:00:05.127) 0:02:33.491 ******** 2026-01-05 01:15:43.073502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-05 01:15:43.073525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073565 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.073571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073601 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:43.073609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073614 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073635 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:43.073643 | orchestrator | 2026-01-05 01:15:43.073648 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-05 01:15:43.073652 | orchestrator | Monday 05 January 2026 01:13:35 +0000 (0:00:00.978) 0:02:34.470 ******** 2026-01-05 01:15:43.073657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073686 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.073691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073728 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:43.073732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 01:15:43.073740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 01:15:43.073745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 01:15:43.073759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:15:43.073763 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:43.073768 | orchestrator | 2026-01-05 01:15:43.073772 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-05 01:15:43.073777 | orchestrator | Monday 05 January 2026 01:13:36 +0000 (0:00:00.937) 
0:02:35.408 ******** 2026-01-05 01:15:43.073784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.073789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 
01:15:43.073797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074309 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074327 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074351 | orchestrator | 2026-01-05 01:15:43.074356 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-05 01:15:43.074360 | orchestrator | Monday 05 January 2026 01:13:41 +0000 (0:00:05.033) 0:02:40.442 ******** 2026-01-05 01:15:43.074364 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 01:15:43.074369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 01:15:43.074373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 01:15:43.074377 | orchestrator | 2026-01-05 01:15:43.074382 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-05 01:15:43.074386 | orchestrator | Monday 05 January 2026 01:13:43 +0000 (0:00:01.854) 0:02:42.296 ******** 2026-01-05 01:15:43.074393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2026-01-05 01:15:43.074479 | orchestrator | 2026-01-05 01:15:43.074483 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-05 01:15:43.074487 | orchestrator | Monday 05 January 2026 01:13:59 +0000 (0:00:16.643) 0:02:58.939 ******** 2026-01-05 01:15:43.074491 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.074495 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.074500 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.074504 | orchestrator | 2026-01-05 01:15:43.074508 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-05 01:15:43.074514 | orchestrator | Monday 05 January 2026 01:14:01 +0000 (0:00:01.509) 0:03:00.449 ******** 2026-01-05 01:15:43.074519 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074523 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074527 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074531 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074535 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074539 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074578 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074583 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074588 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074592 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074596 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074600 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 
2026-01-05 01:15:43.074604 | orchestrator | 2026-01-05 01:15:43.074608 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-05 01:15:43.074612 | orchestrator | Monday 05 January 2026 01:14:06 +0000 (0:00:05.139) 0:03:05.589 ******** 2026-01-05 01:15:43.074616 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074620 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074624 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074628 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074632 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074637 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074641 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074645 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074649 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074653 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074657 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074661 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074665 | orchestrator | 2026-01-05 01:15:43.074669 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-05 01:15:43.074673 | orchestrator | Monday 05 January 2026 01:14:12 +0000 (0:00:05.734) 0:03:11.324 ******** 2026-01-05 01:15:43.074677 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074681 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 
01:15:43.074685 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 01:15:43.074689 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074694 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074703 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 01:15:43.074707 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074712 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074718 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 01:15:43.074723 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074728 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074735 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 01:15:43.074741 | orchestrator | 2026-01-05 01:15:43.074748 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-05 01:15:43.074754 | orchestrator | Monday 05 January 2026 01:14:17 +0000 (0:00:04.966) 0:03:16.290 ******** 2026-01-05 01:15:43.074760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 01:15:43.074786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 01:15:43.074812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074836 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074864 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:15:43.074888 | orchestrator | 2026-01-05 01:15:43.074896 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 01:15:43.074900 | 
orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:03.637) 0:03:19.928 ******** 2026-01-05 01:15:43.074904 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:43.074909 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:43.074913 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:43.074917 | orchestrator | 2026-01-05 01:15:43.074921 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-05 01:15:43.074925 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.329) 0:03:20.257 ******** 2026-01-05 01:15:43.074929 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.074933 | orchestrator | 2026-01-05 01:15:43.074937 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-05 01:15:43.074945 | orchestrator | Monday 05 January 2026 01:14:23 +0000 (0:00:02.211) 0:03:22.468 ******** 2026-01-05 01:15:43.074949 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.074953 | orchestrator | 2026-01-05 01:15:43.074957 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-05 01:15:43.074961 | orchestrator | Monday 05 January 2026 01:14:25 +0000 (0:00:02.169) 0:03:24.638 ******** 2026-01-05 01:15:43.074965 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.074969 | orchestrator | 2026-01-05 01:15:43.074973 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-05 01:15:43.074978 | orchestrator | Monday 05 January 2026 01:14:27 +0000 (0:00:02.240) 0:03:26.878 ******** 2026-01-05 01:15:43.074982 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.074986 | orchestrator | 2026-01-05 01:15:43.074990 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-05 01:15:43.074994 | orchestrator | Monday 05 January 2026 01:14:30 +0000 (0:00:02.681) 
0:03:29.560 ******** 2026-01-05 01:15:43.074998 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075002 | orchestrator | 2026-01-05 01:15:43.075006 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 01:15:43.075010 | orchestrator | Monday 05 January 2026 01:14:53 +0000 (0:00:22.873) 0:03:52.434 ******** 2026-01-05 01:15:43.075014 | orchestrator | 2026-01-05 01:15:43.075018 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 01:15:43.075022 | orchestrator | Monday 05 January 2026 01:14:53 +0000 (0:00:00.069) 0:03:52.503 ******** 2026-01-05 01:15:43.075026 | orchestrator | 2026-01-05 01:15:43.075030 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 01:15:43.075034 | orchestrator | Monday 05 January 2026 01:14:53 +0000 (0:00:00.084) 0:03:52.588 ******** 2026-01-05 01:15:43.075038 | orchestrator | 2026-01-05 01:15:43.075043 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-05 01:15:43.075049 | orchestrator | Monday 05 January 2026 01:14:53 +0000 (0:00:00.069) 0:03:52.657 ******** 2026-01-05 01:15:43.075053 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075057 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.075062 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.075066 | orchestrator | 2026-01-05 01:15:43.075070 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-05 01:15:43.075074 | orchestrator | Monday 05 January 2026 01:15:10 +0000 (0:00:16.776) 0:04:09.433 ******** 2026-01-05 01:15:43.075078 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075082 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.075086 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.075090 | orchestrator | 2026-01-05 
01:15:43.075094 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-05 01:15:43.075098 | orchestrator | Monday 05 January 2026 01:15:16 +0000 (0:00:06.443) 0:04:15.877 ******** 2026-01-05 01:15:43.075102 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.075106 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.075111 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075115 | orchestrator | 2026-01-05 01:15:43.075119 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-05 01:15:43.075123 | orchestrator | Monday 05 January 2026 01:15:25 +0000 (0:00:08.611) 0:04:24.488 ******** 2026-01-05 01:15:43.075127 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075131 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.075135 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.075139 | orchestrator | 2026-01-05 01:15:43.075143 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-05 01:15:43.075147 | orchestrator | Monday 05 January 2026 01:15:30 +0000 (0:00:05.337) 0:04:29.825 ******** 2026-01-05 01:15:43.075151 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:15:43.075155 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:15:43.075159 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:15:43.075167 | orchestrator | 2026-01-05 01:15:43.075171 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:15:43.075176 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:15:43.075184 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:15:43.075188 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-01-05 01:15:43.075193 | orchestrator | 2026-01-05 01:15:43.075197 | orchestrator | 2026-01-05 01:15:43.075201 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:15:43.075205 | orchestrator | Monday 05 January 2026 01:15:41 +0000 (0:00:10.728) 0:04:40.553 ******** 2026-01-05 01:15:43.075209 | orchestrator | =============================================================================== 2026-01-05 01:15:43.075213 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.87s 2026-01-05 01:15:43.075217 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.78s 2026-01-05 01:15:43.075221 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.64s 2026-01-05 01:15:43.075225 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.57s 2026-01-05 01:15:43.075229 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.52s 2026-01-05 01:15:43.075233 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.73s 2026-01-05 01:15:43.075237 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.87s 2026-01-05 01:15:43.075241 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.61s 2026-01-05 01:15:43.075245 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.02s 2026-01-05 01:15:43.075249 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.39s 2026-01-05 01:15:43.075253 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.65s 2026-01-05 01:15:43.075258 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.49s 2026-01-05 01:15:43.075262 | orchestrator | 
octavia : Restart octavia-driver-agent container ------------------------ 6.44s 2026-01-05 01:15:43.075266 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.73s 2026-01-05 01:15:43.075270 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.69s 2026-01-05 01:15:43.075274 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.41s 2026-01-05 01:15:43.075278 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.34s 2026-01-05 01:15:43.075282 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.14s 2026-01-05 01:15:43.075286 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.13s 2026-01-05 01:15:43.075290 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.03s 2026-01-05 01:15:46.111880 | orchestrator | 2026-01-05 01:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-05 01:16:43.930143 | orchestrator | 2026-01-05 01:16:44.324992 | orchestrator | 2026-01-05 01:16:44.334092 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jan 5 01:16:44 UTC 2026 2026-01-05 01:16:44.334210 | orchestrator | 2026-01-05 01:16:44.794660 | orchestrator | ok: Runtime: 0:37:16.560019 2026-01-05 01:16:45.059217 | 2026-01-05 01:16:45.059367 | TASK [Bootstrap services] 2026-01-05 01:16:45.954693 | orchestrator | 2026-01-05 01:16:45.954882 | orchestrator | # BOOTSTRAP 2026-01-05 01:16:45.954904 | orchestrator | 2026-01-05 01:16:45.954912 | orchestrator | + set -e 2026-01-05 01:16:45.954920 | orchestrator | + echo 2026-01-05 01:16:45.954929 | orchestrator | + echo '# BOOTSTRAP' 2026-01-05 
01:16:45.954940 | orchestrator | + echo 2026-01-05 01:16:45.954972 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-05 01:16:45.966097 | orchestrator | + set -e 2026-01-05 01:16:45.966207 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-05 01:16:51.403833 | orchestrator | 2026-01-05 01:16:51 | INFO  | It takes a moment until task 55ff7869-1e55-46dd-bc60-92b92cbc4307 (flavor-manager) has been started and output is visible here. 2026-01-05 01:16:59.205683 | orchestrator | 2026-01-05 01:16:54 | INFO  | Flavor SCS-1L-1 created 2026-01-05 01:16:59.205752 | orchestrator | 2026-01-05 01:16:54 | INFO  | Flavor SCS-1L-1-5 created 2026-01-05 01:16:59.205765 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-2 created 2026-01-05 01:16:59.205772 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-2-5 created 2026-01-05 01:16:59.205779 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-4 created 2026-01-05 01:16:59.205786 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-4-10 created 2026-01-05 01:16:59.205792 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-8 created 2026-01-05 01:16:59.205800 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-1V-8-20 created 2026-01-05 01:16:59.205811 | orchestrator | 2026-01-05 01:16:55 | INFO  | Flavor SCS-2V-4 created 2026-01-05 01:16:59.205816 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-2V-4-10 created 2026-01-05 01:16:59.205820 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-2V-8 created 2026-01-05 01:16:59.205824 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-2V-8-20 created 2026-01-05 01:16:59.205828 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-2V-16 created 2026-01-05 01:16:59.205832 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-2V-16-50 created 2026-01-05 01:16:59.205836 | orchestrator | 2026-01-05 01:16:56 | INFO  | Flavor SCS-4V-8 created 
2026-01-05 01:16:59.205840 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-4V-8-20 created 2026-01-05 01:16:59.205844 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-4V-16 created 2026-01-05 01:16:59.205847 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-4V-16-50 created 2026-01-05 01:16:59.205851 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-4V-32 created 2026-01-05 01:16:59.205855 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-4V-32-100 created 2026-01-05 01:16:59.205859 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-8V-16 created 2026-01-05 01:16:59.205864 | orchestrator | 2026-01-05 01:16:57 | INFO  | Flavor SCS-8V-16-50 created 2026-01-05 01:16:59.205871 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-8V-32 created 2026-01-05 01:16:59.205876 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-8V-32-100 created 2026-01-05 01:16:59.205879 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-16V-32 created 2026-01-05 01:16:59.205883 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-16V-32-100 created 2026-01-05 01:16:59.205887 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-2V-4-20s created 2026-01-05 01:16:59.205891 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-4V-8-50s created 2026-01-05 01:16:59.205895 | orchestrator | 2026-01-05 01:16:58 | INFO  | Flavor SCS-8V-32-100s created 2026-01-05 01:17:01.731551 | orchestrator | 2026-01-05 01:17:01 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-05 01:17:11.854936 | orchestrator | 2026-01-05 01:17:11 | INFO  | Task 5dcb971f-d808-4936-8e4b-59fcfa5af4a5 (bootstrap-basic) was prepared for execution. 2026-01-05 01:17:11.855022 | orchestrator | 2026-01-05 01:17:11 | INFO  | It takes a moment until task 5dcb971f-d808-4936-8e4b-59fcfa5af4a5 (bootstrap-basic) has been started and output is visible here. 
2026-01-05 01:17:59.462128 | orchestrator | 2026-01-05 01:17:59.462217 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-05 01:17:59.462224 | orchestrator | 2026-01-05 01:17:59.462228 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 01:17:59.462233 | orchestrator | Monday 05 January 2026 01:17:16 +0000 (0:00:00.075) 0:00:00.075 ******** 2026-01-05 01:17:59.462238 | orchestrator | ok: [localhost] 2026-01-05 01:17:59.462243 | orchestrator | 2026-01-05 01:17:59.462247 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-05 01:17:59.462251 | orchestrator | Monday 05 January 2026 01:17:18 +0000 (0:00:01.954) 0:00:02.030 ******** 2026-01-05 01:17:59.462255 | orchestrator | ok: [localhost] 2026-01-05 01:17:59.462259 | orchestrator | 2026-01-05 01:17:59.462263 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-05 01:17:59.462276 | orchestrator | Monday 05 January 2026 01:17:27 +0000 (0:00:09.537) 0:00:11.567 ******** 2026-01-05 01:17:59.462280 | orchestrator | changed: [localhost] 2026-01-05 01:17:59.462284 | orchestrator | 2026-01-05 01:17:59.462288 | orchestrator | TASK [Create public network] *************************************************** 2026-01-05 01:17:59.462293 | orchestrator | Monday 05 January 2026 01:17:35 +0000 (0:00:07.535) 0:00:19.103 ******** 2026-01-05 01:17:59.462297 | orchestrator | changed: [localhost] 2026-01-05 01:17:59.462300 | orchestrator | 2026-01-05 01:17:59.462304 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-05 01:17:59.462308 | orchestrator | Monday 05 January 2026 01:17:40 +0000 (0:00:05.355) 0:00:24.458 ******** 2026-01-05 01:17:59.462315 | orchestrator | changed: [localhost] 2026-01-05 01:17:59.462320 | orchestrator | 2026-01-05 01:17:59.462325 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-05 01:17:59.462332 | orchestrator | Monday 05 January 2026 01:17:47 +0000 (0:00:06.441) 0:00:30.900 ******** 2026-01-05 01:17:59.462338 | orchestrator | changed: [localhost] 2026-01-05 01:17:59.462344 | orchestrator | 2026-01-05 01:17:59.462350 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-05 01:17:59.462357 | orchestrator | Monday 05 January 2026 01:17:51 +0000 (0:00:04.343) 0:00:35.244 ******** 2026-01-05 01:17:59.462363 | orchestrator | changed: [localhost] 2026-01-05 01:17:59.462369 | orchestrator | 2026-01-05 01:17:59.462375 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-05 01:17:59.462390 | orchestrator | Monday 05 January 2026 01:17:55 +0000 (0:00:03.890) 0:00:39.135 ******** 2026-01-05 01:17:59.462397 | orchestrator | ok: [localhost] 2026-01-05 01:17:59.462403 | orchestrator | 2026-01-05 01:17:59.462409 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:17:59.462416 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:17:59.462423 | orchestrator | 2026-01-05 01:17:59.462430 | orchestrator | 2026-01-05 01:17:59.462436 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:17:59.462442 | orchestrator | Monday 05 January 2026 01:17:59 +0000 (0:00:03.635) 0:00:42.770 ******** 2026-01-05 01:17:59.462449 | orchestrator | =============================================================================== 2026-01-05 01:17:59.462456 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.54s 2026-01-05 01:17:59.462463 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.54s 2026-01-05 01:17:59.462470 | 
orchestrator | Set public network to default ------------------------------------------- 6.44s 2026-01-05 01:17:59.462476 | orchestrator | Create public network --------------------------------------------------- 5.36s 2026-01-05 01:17:59.462504 | orchestrator | Create public subnet ---------------------------------------------------- 4.34s 2026-01-05 01:17:59.462510 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.89s 2026-01-05 01:17:59.462516 | orchestrator | Create manager role ----------------------------------------------------- 3.64s 2026-01-05 01:17:59.462523 | orchestrator | Gathering Facts --------------------------------------------------------- 1.95s 2026-01-05 01:18:02.022608 | orchestrator | 2026-01-05 01:18:02 | INFO  | It takes a moment until task c1d723be-b882-4a4e-97bd-860c6a3d7928 (image-manager) has been started and output is visible here. 2026-01-05 01:18:42.046359 | orchestrator | 2026-01-05 01:18:04 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-05 01:18:42.046467 | orchestrator | 2026-01-05 01:18:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-05 01:18:42.046477 | orchestrator | 2026-01-05 01:18:05 | INFO  | Importing image Cirros 0.6.2 2026-01-05 01:18:42.046482 | orchestrator | 2026-01-05 01:18:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-05 01:18:42.046488 | orchestrator | 2026-01-05 01:18:06 | INFO  | Waiting for image to leave queued state... 2026-01-05 01:18:42.046494 | orchestrator | 2026-01-05 01:18:08 | INFO  | Waiting for import to complete... 
2026-01-05 01:18:42.046498 | orchestrator | 2026-01-05 01:18:19 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-05 01:18:42.046503 | orchestrator | 2026-01-05 01:18:19 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-05 01:18:42.046507 | orchestrator | 2026-01-05 01:18:19 | INFO  | Setting internal_version = 0.6.2 2026-01-05 01:18:42.046511 | orchestrator | 2026-01-05 01:18:19 | INFO  | Setting image_original_user = cirros 2026-01-05 01:18:42.046516 | orchestrator | 2026-01-05 01:18:19 | INFO  | Adding tag os:cirros 2026-01-05 01:18:42.046520 | orchestrator | 2026-01-05 01:18:19 | INFO  | Setting property architecture: x86_64 2026-01-05 01:18:42.046524 | orchestrator | 2026-01-05 01:18:20 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 01:18:42.046528 | orchestrator | 2026-01-05 01:18:20 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 01:18:42.046532 | orchestrator | 2026-01-05 01:18:20 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 01:18:42.046537 | orchestrator | 2026-01-05 01:18:20 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 01:18:42.046540 | orchestrator | 2026-01-05 01:18:20 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 01:18:42.046544 | orchestrator | 2026-01-05 01:18:21 | INFO  | Setting property os_distro: cirros 2026-01-05 01:18:42.046548 | orchestrator | 2026-01-05 01:18:21 | INFO  | Setting property os_purpose: minimal 2026-01-05 01:18:42.046552 | orchestrator | 2026-01-05 01:18:21 | INFO  | Setting property replace_frequency: never 2026-01-05 01:18:42.046556 | orchestrator | 2026-01-05 01:18:21 | INFO  | Setting property uuid_validity: none 2026-01-05 01:18:42.046560 | orchestrator | 2026-01-05 01:18:21 | INFO  | Setting property provided_until: none 2026-01-05 01:18:42.046564 | orchestrator | 2026-01-05 01:18:22 | INFO  | Setting property image_description: Cirros 2026-01-05 01:18:42.046568 | orchestrator | 2026-01-05 01:18:22 | INFO  | 
Setting property image_name: Cirros 2026-01-05 01:18:42.046571 | orchestrator | 2026-01-05 01:18:22 | INFO  | Setting property internal_version: 0.6.2 2026-01-05 01:18:42.046575 | orchestrator | 2026-01-05 01:18:22 | INFO  | Setting property image_original_user: cirros 2026-01-05 01:18:42.046596 | orchestrator | 2026-01-05 01:18:23 | INFO  | Setting property os_version: 0.6.2 2026-01-05 01:18:42.046607 | orchestrator | 2026-01-05 01:18:23 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-05 01:18:42.046612 | orchestrator | 2026-01-05 01:18:23 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-05 01:18:42.046616 | orchestrator | 2026-01-05 01:18:23 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-05 01:18:42.046620 | orchestrator | 2026-01-05 01:18:23 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-05 01:18:42.046624 | orchestrator | 2026-01-05 01:18:23 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-05 01:18:42.046628 | orchestrator | 2026-01-05 01:18:23 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-05 01:18:42.046636 | orchestrator | 2026-01-05 01:18:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-05 01:18:42.046640 | orchestrator | 2026-01-05 01:18:24 | INFO  | Importing image Cirros 0.6.3 2026-01-05 01:18:42.046644 | orchestrator | 2026-01-05 01:18:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-05 01:18:42.046648 | orchestrator | 2026-01-05 01:18:24 | INFO  | Waiting for image to leave queued state... 2026-01-05 01:18:42.046652 | orchestrator | 2026-01-05 01:18:26 | INFO  | Waiting for import to complete... 
2026-01-05 01:18:42.046668 | orchestrator | 2026-01-05 01:18:36 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-05 01:18:42.046673 | orchestrator | 2026-01-05 01:18:37 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-05 01:18:42.046676 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting internal_version = 0.6.3 2026-01-05 01:18:42.046680 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting image_original_user = cirros 2026-01-05 01:18:42.046684 | orchestrator | 2026-01-05 01:18:37 | INFO  | Adding tag os:cirros 2026-01-05 01:18:42.046688 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting property architecture: x86_64 2026-01-05 01:18:42.046692 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 01:18:42.046696 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 01:18:42.046699 | orchestrator | 2026-01-05 01:18:37 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 01:18:42.046703 | orchestrator | 2026-01-05 01:18:38 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 01:18:42.046707 | orchestrator | 2026-01-05 01:18:38 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 01:18:42.046712 | orchestrator | 2026-01-05 01:18:38 | INFO  | Setting property os_distro: cirros 2026-01-05 01:18:42.046715 | orchestrator | 2026-01-05 01:18:38 | INFO  | Setting property os_purpose: minimal 2026-01-05 01:18:42.046719 | orchestrator | 2026-01-05 01:18:38 | INFO  | Setting property replace_frequency: never 2026-01-05 01:18:42.046724 | orchestrator | 2026-01-05 01:18:39 | INFO  | Setting property uuid_validity: none 2026-01-05 01:18:42.046727 | orchestrator | 2026-01-05 01:18:39 | INFO  | Setting property provided_until: none 2026-01-05 01:18:42.046731 | orchestrator | 2026-01-05 01:18:39 | INFO  | Setting property image_description: Cirros 2026-01-05 01:18:42.046735 | orchestrator | 2026-01-05 01:18:39 | INFO  | 
Setting property image_name: Cirros 2026-01-05 01:18:42.046739 | orchestrator | 2026-01-05 01:18:40 | INFO  | Setting property internal_version: 0.6.3 2026-01-05 01:18:42.046747 | orchestrator | 2026-01-05 01:18:40 | INFO  | Setting property image_original_user: cirros 2026-01-05 01:18:42.046751 | orchestrator | 2026-01-05 01:18:40 | INFO  | Setting property os_version: 0.6.3 2026-01-05 01:18:42.046755 | orchestrator | 2026-01-05 01:18:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-05 01:18:42.046759 | orchestrator | 2026-01-05 01:18:40 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-05 01:18:42.046763 | orchestrator | 2026-01-05 01:18:41 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-05 01:18:42.046766 | orchestrator | 2026-01-05 01:18:41 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-05 01:18:42.046770 | orchestrator | 2026-01-05 01:18:41 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-05 01:18:42.405909 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-05 01:18:44.850068 | orchestrator | 2026-01-05 01:18:44 | INFO  | date: 2026-01-04 2026-01-05 01:18:44.850195 | orchestrator | 2026-01-05 01:18:44 | INFO  | image: octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 01:18:44.850235 | orchestrator | 2026-01-05 01:18:44 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 01:18:44.850249 | orchestrator | 2026-01-05 01:18:44 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2.CHECKSUM 2026-01-05 01:18:45.067395 | orchestrator | 2026-01-05 01:18:45 | INFO  | checksum: efe91d4646b3899561e95b1c77d6d6bc98459aee738b3292e0742e3de3cdee03 2026-01-05 01:18:45.140767 | orchestrator | 
2026-01-05 01:18:45 | INFO  | It takes a moment until task d849d52f-44b2-4579-95e4-6511a0038d2f (image-manager) has been started and output is visible here. 2026-01-05 01:20:16.718616 | orchestrator | 2026-01-05 01:18:47 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 01:20:16.718836 | orchestrator | 2026-01-05 01:18:47 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2: 200 2026-01-05 01:20:16.718866 | orchestrator | 2026-01-05 01:18:47 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-04 2026-01-05 01:20:16.718881 | orchestrator | 2026-01-05 01:18:47 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 01:20:16.718893 | orchestrator | 2026-01-05 01:18:48 | INFO  | Waiting for image to leave queued state... 2026-01-05 01:20:16.718904 | orchestrator | 2026-01-05 01:18:50 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718914 | orchestrator | 2026-01-05 01:19:01 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718924 | orchestrator | 2026-01-05 01:19:11 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718934 | orchestrator | 2026-01-05 01:19:21 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718945 | orchestrator | 2026-01-05 01:19:31 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718957 | orchestrator | 2026-01-05 01:19:41 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718966 | orchestrator | 2026-01-05 01:19:51 | INFO  | Waiting for import to complete... 2026-01-05 01:20:16.718976 | orchestrator | 2026-01-05 01:20:01 | INFO  | Waiting for import to complete... 
2026-01-05 01:20:16.718986 | orchestrator | 2026-01-05 01:20:11 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-04' successfully completed, reloading images 2026-01-05 01:20:16.719026 | orchestrator | 2026-01-05 01:20:12 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 01:20:16.719037 | orchestrator | 2026-01-05 01:20:12 | INFO  | Setting internal_version = 2026-01-04 2026-01-05 01:20:16.719048 | orchestrator | 2026-01-05 01:20:12 | INFO  | Setting image_original_user = ubuntu 2026-01-05 01:20:16.719065 | orchestrator | 2026-01-05 01:20:12 | INFO  | Adding tag amphora 2026-01-05 01:20:16.719082 | orchestrator | 2026-01-05 01:20:12 | INFO  | Adding tag os:ubuntu 2026-01-05 01:20:16.719098 | orchestrator | 2026-01-05 01:20:12 | INFO  | Setting property architecture: x86_64 2026-01-05 01:20:16.719115 | orchestrator | 2026-01-05 01:20:12 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 01:20:16.719132 | orchestrator | 2026-01-05 01:20:13 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 01:20:16.719149 | orchestrator | 2026-01-05 01:20:13 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 01:20:16.719162 | orchestrator | 2026-01-05 01:20:13 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 01:20:16.719172 | orchestrator | 2026-01-05 01:20:13 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 01:20:16.719181 | orchestrator | 2026-01-05 01:20:13 | INFO  | Setting property os_distro: ubuntu 2026-01-05 01:20:16.719191 | orchestrator | 2026-01-05 01:20:14 | INFO  | Setting property replace_frequency: quarterly 2026-01-05 01:20:16.719201 | orchestrator | 2026-01-05 01:20:14 | INFO  | Setting property uuid_validity: last-1 2026-01-05 01:20:16.719211 | orchestrator | 2026-01-05 01:20:14 | INFO  | Setting property provided_until: none 2026-01-05 01:20:16.719236 | orchestrator | 2026-01-05 01:20:14 | INFO  | Setting property os_purpose: network 2026-01-05 01:20:16.719246 | orchestrator 
| 2026-01-05 01:20:14 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-05 01:20:16.719256 | orchestrator | 2026-01-05 01:20:15 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-05 01:20:16.719266 | orchestrator | 2026-01-05 01:20:15 | INFO  | Setting property internal_version: 2026-01-04 2026-01-05 01:20:16.719278 | orchestrator | 2026-01-05 01:20:15 | INFO  | Setting property image_original_user: ubuntu 2026-01-05 01:20:16.719295 | orchestrator | 2026-01-05 01:20:15 | INFO  | Setting property os_version: 2026-01-04 2026-01-05 01:20:16.719311 | orchestrator | 2026-01-05 01:20:16 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 01:20:16.719327 | orchestrator | 2026-01-05 01:20:16 | INFO  | Setting property image_build_date: 2026-01-04 2026-01-05 01:20:16.719367 | orchestrator | 2026-01-05 01:20:16 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 01:20:16.719386 | orchestrator | 2026-01-05 01:20:16 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 01:20:16.719403 | orchestrator | 2026-01-05 01:20:16 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-05 01:20:16.719420 | orchestrator | 2026-01-05 01:20:16 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-05 01:20:16.719437 | orchestrator | 2026-01-05 01:20:16 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-05 01:20:16.719454 | orchestrator | 2026-01-05 01:20:16 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-05 01:20:17.249713 | orchestrator | ok: Runtime: 0:03:31.536805 2026-01-05 01:20:17.271397 | 2026-01-05 01:20:17.271567 | TASK [Run checks] 2026-01-05 01:20:18.019860 | orchestrator | + set -e 2026-01-05 01:20:18.020063 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-01-05 01:20:18.020087 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 01:20:18.020106 | orchestrator | ++ INTERACTIVE=false 2026-01-05 01:20:18.020119 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 01:20:18.020130 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 01:20:18.020143 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 01:20:18.020935 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 01:20:18.026890 | orchestrator | 2026-01-05 01:20:18.027010 | orchestrator | # CHECK 2026-01-05 01:20:18.027029 | orchestrator | 2026-01-05 01:20:18.027043 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 01:20:18.027060 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 01:20:18.027073 | orchestrator | + echo 2026-01-05 01:20:18.027084 | orchestrator | + echo '# CHECK' 2026-01-05 01:20:18.027101 | orchestrator | + echo 2026-01-05 01:20:18.027124 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 01:20:18.027821 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 01:20:18.090080 | orchestrator | 2026-01-05 01:20:18.090175 | orchestrator | ## Containers @ testbed-manager 2026-01-05 01:20:18.090187 | orchestrator | 2026-01-05 01:20:18.090198 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 01:20:18.090207 | orchestrator | + echo 2026-01-05 01:20:18.090216 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-05 01:20:18.090225 | orchestrator | + echo 2026-01-05 01:20:18.090234 | orchestrator | + osism container testbed-manager ps 2026-01-05 01:20:20.192581 | orchestrator | 2026-01-05 01:20:20 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-01-05 01:20:20.598449 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-05 01:20:20.598621 | orchestrator | 4965ac8e260b 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_blackbox_exporter 2026-01-05 01:20:20.598651 | orchestrator | 6ff14eafecaf registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager 2026-01-05 01:20:20.598664 | orchestrator | 0b28fee9b760 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-01-05 01:20:20.598683 | orchestrator | 69290cf51619 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2026-01-05 01:20:20.598695 | orchestrator | 221473121d06 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server 2026-01-05 01:20:20.598713 | orchestrator | ce832c58d0a5 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient 2026-01-05 01:20:20.598759 | orchestrator | 6e79ac10f403 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-01-05 01:20:20.598772 | orchestrator | d8b3aec9759d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-01-05 01:20:20.598814 | orchestrator | 8e8b34626e1a registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-01-05 01:20:20.598826 | orchestrator | cb5575485ebf phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 34 minutes ago Up 33 minutes (healthy) 80/tcp phpmyadmin 2026-01-05 01:20:20.598838 | orchestrator | f13605bf2eb1 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 34 minutes ago Up 34 minutes openstackclient 
2026-01-05 01:20:20.598849 | orchestrator | 57c7ec6ea188 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 34 minutes ago Up 34 minutes (healthy) 8080/tcp homer 2026-01-05 01:20:20.598861 | orchestrator | 67ba3557c329 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 58 minutes ago Up 57 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-05 01:20:20.598878 | orchestrator | 449a0e63832b registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" About an hour ago Up 41 minutes (healthy) manager-inventory_reconciler-1 2026-01-05 01:20:20.598914 | orchestrator | aecf80eababe registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-ansible 2026-01-05 01:20:20.598926 | orchestrator | f95f4ee86159 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) kolla-ansible 2026-01-05 01:20:20.598938 | orchestrator | 5144aabed9ad registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-kubernetes 2026-01-05 01:20:20.598949 | orchestrator | 58d82fe8d07e registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) ceph-ansible 2026-01-05 01:20:20.598961 | orchestrator | d3eb9311dac3 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 42 minutes (healthy) 8000/tcp manager-ara-server-1 2026-01-05 01:20:20.598973 | orchestrator | 7532d548d513 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1 2026-01-05 01:20:20.598984 | orchestrator | 4e4319438d44 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" About an hour ago Up 42 minutes (healthy) osismclient 2026-01-05 01:20:20.598996 | orchestrator | 8465796d360f 
registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-listener-1 2026-01-05 01:20:20.599014 | orchestrator | 4522d7682e2f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-beat-1 2026-01-05 01:20:20.599026 | orchestrator | 7d4d0bb7a2ca registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 6379/tcp manager-redis-1 2026-01-05 01:20:20.599038 | orchestrator | deafdb8688ac registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-05 01:20:20.599049 | orchestrator | 57b209ac93db registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-openstack-1 2026-01-05 01:20:20.599061 | orchestrator | 1a96cf54272b registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" About an hour ago Up 42 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-05 01:20:20.599077 | orchestrator | 76d6c7b66266 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-flower-1 2026-01-05 01:20:20.599089 | orchestrator | 5fdfb62d66f1 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-05 01:20:20.972895 | orchestrator | 2026-01-05 01:20:20.973090 | orchestrator | ## Images @ testbed-manager 2026-01-05 01:20:20.973170 | orchestrator | 2026-01-05 01:20:20.973207 | orchestrator | + echo 2026-01-05 01:20:20.973220 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-05 01:20:20.973233 | orchestrator | + echo 2026-01-05 01:20:20.973246 | orchestrator | + osism container testbed-manager images 
2026-01-05 01:20:23.387764 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 01:20:23.387877 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f5a6cc51123f 22 hours ago 238MB 2026-01-05 01:20:23.387890 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 weeks ago 11.5MB 2026-01-05 01:20:23.387899 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 5 weeks ago 608MB 2026-01-05 01:20:23.387909 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 01:20:23.387919 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 01:20:23.387928 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 01:20:23.387937 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 5 weeks ago 308MB 2026-01-05 01:20:23.387946 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 01:20:23.387955 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 5 weeks ago 404MB 2026-01-05 01:20:23.387990 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 5 weeks ago 839MB 2026-01-05 01:20:23.388005 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 01:20:23.388031 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 5 weeks ago 330MB 2026-01-05 01:20:23.388045 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 5 weeks ago 613MB 2026-01-05 01:20:23.388059 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 5 weeks ago 560MB 2026-01-05 
01:20:23.388073 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 5 weeks ago 1.23GB 2026-01-05 01:20:23.388087 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 5 weeks ago 383MB 2026-01-05 01:20:23.388100 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 5 weeks ago 238MB 2026-01-05 01:20:23.388114 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 7 weeks ago 334MB 2026-01-05 01:20:23.388129 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-05 01:20:23.388144 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 2 months ago 742MB 2026-01-05 01:20:23.388159 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-05 01:20:23.388173 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-05 01:20:23.388187 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 months ago 453MB 2026-01-05 01:20:23.388202 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-05 01:20:23.711197 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 01:20:23.711485 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 01:20:23.782114 | orchestrator | 2026-01-05 01:20:23.782242 | orchestrator | ## Containers @ testbed-node-0 2026-01-05 01:20:23.782266 | orchestrator | 2026-01-05 01:20:23.782281 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 01:20:23.782296 | orchestrator | + echo 2026-01-05 01:20:23.782312 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-05 01:20:23.782330 | orchestrator | + echo 2026-01-05 01:20:23.782345 | orchestrator | + osism container testbed-node-0 ps 2026-01-05 01:20:26.303242 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS 
PORTS NAMES 2026-01-05 01:20:26.303397 | orchestrator | 4352829d5458 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-05 01:20:26.303414 | orchestrator | 2f3982e202cf registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-05 01:20:26.303425 | orchestrator | 222ade08d471 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-05 01:20:26.303435 | orchestrator | 8888c12e13f2 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-05 01:20:26.303446 | orchestrator | fab9c5ebbbf4 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-05 01:20:26.303485 | orchestrator | 4d1e016e7ccd registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-01-05 01:20:26.303496 | orchestrator | 30776d89bf48 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) magnum_api 2026-01-05 01:20:26.303507 | orchestrator | 5e3682ebf2d1 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-01-05 01:20:26.303518 | orchestrator | 35cb5c61bdc5 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api 2026-01-05 01:20:26.303544 | orchestrator | a1a840f2cdfd registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_novncproxy 2026-01-05 
01:20:26.303565 | orchestrator | 0ebae66d31db registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_conductor 2026-01-05 01:20:26.303576 | orchestrator | 340a73ba1a28 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2026-01-05 01:20:26.303587 | orchestrator | 5c01381ad5de registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_worker 2026-01-05 01:20:26.303598 | orchestrator | 22a6c89f5614 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_mdns 2026-01-05 01:20:26.303609 | orchestrator | 33a390542da7 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_producer 2026-01-05 01:20:26.303619 | orchestrator | 83a60b49c237 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_central 2026-01-05 01:20:26.303630 | orchestrator | 9a429417e550 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_api 2026-01-05 01:20:26.303640 | orchestrator | a74d0834b42e registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_backend_bind9 2026-01-05 01:20:26.303651 | orchestrator | cc3d5b167e75 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker 2026-01-05 01:20:26.305124 | orchestrator | 1db746854378 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) 
nova_api 2026-01-05 01:20:26.305187 | orchestrator | 8fa1bd6a263c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_keystone_listener 2026-01-05 01:20:26.305202 | orchestrator | c7d94046bae2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api 2026-01-05 01:20:26.305214 | orchestrator | d19ca15d3aef registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 11 minutes (healthy) nova_scheduler 2026-01-05 01:20:26.305227 | orchestrator | cce26d5b64ad registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2026-01-05 01:20:26.305274 | orchestrator | 5f395753af05 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_backup 2026-01-05 01:20:26.305285 | orchestrator | f47e4947b5bb registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2026-01-05 01:20:26.305297 | orchestrator | d0163b380b9a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-01-05 01:20:26.305315 | orchestrator | bbfe80a8ebf3 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_volume 2026-01-05 01:20:26.305326 | orchestrator | 92031cdd3e1d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2026-01-05 01:20:26.305336 | orchestrator | 240863b850f8 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler 2026-01-05 01:20:26.305344 | orchestrator | ca857ab28d5b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-01-05 01:20:26.305350 | orchestrator | f37b38ac64f1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2026-01-05 01:20:26.305357 | orchestrator | daf61dff2281 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api 2026-01-05 01:20:26.305364 | orchestrator | 340cb680741f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0 2026-01-05 01:20:26.305370 | orchestrator | fb8db5563a7c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-01-05 01:20:26.305377 | orchestrator | eb27c5d2d225 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-01-05 01:20:26.305384 | orchestrator | fc5f3a8a7682 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2026-01-05 01:20:26.305390 | orchestrator | 3ff031940c0b registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2026-01-05 01:20:26.305397 | orchestrator | 398798cd7bac registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-01-05 01:20:26.305405 | orchestrator | dbe9c613bd35 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 
minutes (healthy) opensearch_dashboards 2026-01-05 01:20:26.305436 | orchestrator | 169beb157bf2 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2026-01-05 01:20:26.305453 | orchestrator | d72d733bcf15 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0 2026-01-05 01:20:26.305480 | orchestrator | 0f2b9dc7cc6e registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-01-05 01:20:26.305492 | orchestrator | 92eda749928c registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-01-05 01:20:26.305502 | orchestrator | e47384e054c9 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-01-05 01:20:26.305513 | orchestrator | d3883382dffc registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-01-05 01:20:26.305523 | orchestrator | a1d1ffbf93d8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-01-05 01:20:26.305533 | orchestrator | 1b4af8141382 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-01-05 01:20:26.305544 | orchestrator | 172d64bb54a3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0 2026-01-05 01:20:26.305554 | orchestrator | 33501216a78a registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-01-05 01:20:26.305564 | orchestrator | f0f7918cb995 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 
"dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2026-01-05 01:20:26.305575 | orchestrator | 346f522c2500 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-01-05 01:20:26.305586 | orchestrator | 28796a9af6a8 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-01-05 01:20:26.305596 | orchestrator | 4f4bb2ad4c18 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-01-05 01:20:26.305608 | orchestrator | 347dd0160e06 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-01-05 01:20:26.305620 | orchestrator | 40d8198f2e17 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-01-05 01:20:26.305637 | orchestrator | a77141714d2d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 32 minutes cron 2026-01-05 01:20:26.305650 | orchestrator | e5b535f463b8 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-01-05 01:20:26.305657 | orchestrator | bd20dee3c831 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-01-05 01:20:26.634041 | orchestrator | 2026-01-05 01:20:26.634169 | orchestrator | ## Images @ testbed-node-0 2026-01-05 01:20:26.634184 | orchestrator | 2026-01-05 01:20:26.634192 | orchestrator | + echo 2026-01-05 01:20:26.634222 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-05 01:20:26.634231 | orchestrator | + echo 2026-01-05 01:20:26.634237 | orchestrator | + osism container testbed-node-0 
images 2026-01-05 01:20:29.148073 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 01:20:29.148155 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-05 01:20:29.148161 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-05 01:20:29.148166 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-05 01:20:29.148170 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-05 01:20:29.148174 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-05 01:20:29.148178 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 01:20:29.148182 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 01:20:29.148186 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-05 01:20:29.148190 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-05 01:20:29.148194 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-05 01:20:29.148198 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 01:20:29.148202 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-05 01:20:29.148206 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-05 01:20:29.148210 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-05 
01:20:29.148214 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-05 01:20:29.148218 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-05 01:20:29.148222 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-05 01:20:29.148226 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 01:20:29.148242 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-05 01:20:29.148246 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 01:20:29.148250 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-05 01:20:29.148254 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-05 01:20:29.148258 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-05 01:20:29.148261 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-05 01:20:29.148265 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-05 01:20:29.148285 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-05 01:20:29.148289 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-05 01:20:29.148293 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 5 
weeks ago 976MB 2026-01-05 01:20:29.148297 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 5 weeks ago 976MB 2026-01-05 01:20:29.148300 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-05 01:20:29.148304 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-05 01:20:29.148320 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 5 weeks ago 974MB 2026-01-05 01:20:29.148324 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 5 weeks ago 974MB 2026-01-05 01:20:29.148328 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 5 weeks ago 974MB 2026-01-05 01:20:29.148332 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 5 weeks ago 973MB 2026-01-05 01:20:29.148335 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-05 01:20:29.148339 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-05 01:20:29.148343 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-05 01:20:29.148347 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-05 01:20:29.148351 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-05 01:20:29.148355 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-05 01:20:29.148361 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 
2026-01-05 01:20:29.148436 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-05 01:20:29.148444 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-05 01:20:29.148450 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-05 01:20:29.148456 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-05 01:20:29.148462 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-05 01:20:29.148468 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 2026-01-05 01:20:29.148474 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-05 01:20:29.148481 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-05 01:20:29.148487 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-05 01:20:29.148502 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-05 01:20:29.148508 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-05 01:20:29.148514 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-05 01:20:29.148522 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 5 weeks ago 1.05GB 2026-01-05 01:20:29.148530 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 5 weeks ago 990MB 
2026-01-05 01:20:29.148536 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-05 01:20:29.148542 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-05 01:20:29.148548 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-05 01:20:29.148552 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-05 01:20:29.148556 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-05 01:20:29.148561 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-05 01:20:29.148564 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-05 01:20:29.148574 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-05 01:20:29.148578 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-05 01:20:29.504592 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 01:20:29.505166 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 01:20:29.549428 | orchestrator | 2026-01-05 01:20:29.549518 | orchestrator | ## Containers @ testbed-node-1 2026-01-05 01:20:29.549533 | orchestrator | 2026-01-05 01:20:29.549545 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 01:20:29.549568 | orchestrator | + echo 2026-01-05 01:20:29.549580 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-05 01:20:29.549592 | orchestrator | + echo 2026-01-05 01:20:29.549603 | orchestrator | + osism container testbed-node-1 ps 2026-01-05 01:20:32.158208 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED 
STATUS PORTS NAMES 2026-01-05 01:20:32.158289 | orchestrator | 53ef11b53c5c registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-05 01:20:32.158298 | orchestrator | 53e5c7d8ffe5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-01-05 01:20:32.158304 | orchestrator | 1e7f1557319e registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-05 01:20:32.158308 | orchestrator | 57518e8f177c registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-05 01:20:32.158325 | orchestrator | 529673cbfbec registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-05 01:20:32.158348 | orchestrator | 8c4526c6dd0a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-01-05 01:20:32.158353 | orchestrator | cc574985759a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-01-05 01:20:32.158357 | orchestrator | 56e052043974 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) magnum_api 2026-01-05 01:20:32.158362 | orchestrator | b01dae22b674 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api 2026-01-05 01:20:32.158366 | orchestrator | 5d669296c932 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_novncproxy 
2026-01-05 01:20:32.158370 | orchestrator | d47cec421d9c registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2026-01-05 01:20:32.158378 | orchestrator | e1768e410fe2 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_conductor 2026-01-05 01:20:32.158382 | orchestrator | 0618f3dc0c56 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_worker 2026-01-05 01:20:32.158386 | orchestrator | 2ff884e3c90e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_mdns 2026-01-05 01:20:32.158390 | orchestrator | e7ad5b03f1a2 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_producer 2026-01-05 01:20:32.158395 | orchestrator | c90b77de2e83 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_central 2026-01-05 01:20:32.158399 | orchestrator | 6ab81b4f04f2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_api 2026-01-05 01:20:32.158403 | orchestrator | 28d887a052b4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_backend_bind9 2026-01-05 01:20:32.158408 | orchestrator | 6ef25ee5c31f registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker 2026-01-05 01:20:32.158423 | orchestrator | 9950e89559bd registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes 
(healthy) nova_api 2026-01-05 01:20:32.158428 | orchestrator | e13852e990b8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_keystone_listener 2026-01-05 01:20:32.158432 | orchestrator | 17b8904ce68e registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 11 minutes (healthy) nova_scheduler 2026-01-05 01:20:32.158436 | orchestrator | 4c02c75ecc9a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api 2026-01-05 01:20:32.158440 | orchestrator | a1586568b9f8 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api 2026-01-05 01:20:32.158449 | orchestrator | f9138ffc93ed registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2026-01-05 01:20:32.158455 | orchestrator | 723debd67204 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_backup 2026-01-05 01:20:32.158462 | orchestrator | dfcde10fd6f7 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_volume 2026-01-05 01:20:32.158467 | orchestrator | 05bde7505192 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2026-01-05 01:20:32.158471 | orchestrator | 375d11e7041b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2026-01-05 01:20:32.158475 | orchestrator | 679ef3a1311d registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler 2026-01-05 01:20:32.158479 | orchestrator | 77a3b2ffdbe3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-01-05 01:20:32.158483 | orchestrator | 25c35c83539b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api 2026-01-05 01:20:32.158488 | orchestrator | 5def4f78edeb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2026-01-05 01:20:32.158492 | orchestrator | f13f943ab6f7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-1 2026-01-05 01:20:32.158496 | orchestrator | 46f05bc30dcc registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-01-05 01:20:32.158500 | orchestrator | b3b0f488fdb4 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-01-05 01:20:32.158504 | orchestrator | 281dd012b692 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-01-05 01:20:32.158508 | orchestrator | bea7d00a0d49 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2026-01-05 01:20:32.158513 | orchestrator | 08bd231ecd1f registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2026-01-05 01:20:32.158517 | orchestrator | 903b025f045f registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 24 minutes 
ago Up 24 minutes (healthy) mariadb 2026-01-05 01:20:32.158524 | orchestrator | 0b036dd5936e registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-01-05 01:20:32.158529 | orchestrator | fb4e177361ac registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-1 2026-01-05 01:20:32.158537 | orchestrator | fcc4ce3b9f91 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2026-01-05 01:20:32.158541 | orchestrator | da3f850c16c1 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2026-01-05 01:20:32.158545 | orchestrator | d28c4ef93140 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-01-05 01:20:32.158550 | orchestrator | d07d027c0a86 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-01-05 01:20:32.158554 | orchestrator | c2c9c310076e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-01-05 01:20:32.158558 | orchestrator | 69773d6a5106 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-01-05 01:20:32.158562 | orchestrator | 8e1318bb59b1 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-01-05 01:20:32.158566 | orchestrator | 839a2a48b23d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-01-05 01:20:32.158571 | orchestrator | abbb29a068db registry.osism.tech/osism/ceph-daemon:18.2.7 
"/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-1 2026-01-05 01:20:32.158578 | orchestrator | edc08a34721f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-01-05 01:20:32.158582 | orchestrator | 72f5eed4665d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2026-01-05 01:20:32.158586 | orchestrator | fccd4c3d0a29 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-01-05 01:20:32.158591 | orchestrator | 899723517dca registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-01-05 01:20:32.158595 | orchestrator | c4f78990263a registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-01-05 01:20:32.158599 | orchestrator | 334850be07ec registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron 2026-01-05 01:20:32.158603 | orchestrator | 3b52d985b8d6 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-01-05 01:20:32.158607 | orchestrator | 8b04d0e26866 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-01-05 01:20:32.502282 | orchestrator | 2026-01-05 01:20:32.502379 | orchestrator | ## Images @ testbed-node-1 2026-01-05 01:20:32.502389 | orchestrator | 2026-01-05 01:20:32.502396 | orchestrator | + echo 2026-01-05 01:20:32.502424 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-05 01:20:32.502476 | orchestrator | + echo 2026-01-05 01:20:32.502698 | orchestrator | + osism container 
testbed-node-1 images 2026-01-05 01:20:35.105705 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 01:20:35.105923 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-05 01:20:35.105947 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-05 01:20:35.105958 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-05 01:20:35.105968 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-05 01:20:35.105978 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-05 01:20:35.105988 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 01:20:35.105998 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 01:20:35.106010 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-05 01:20:35.106145 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-05 01:20:35.106156 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-05 01:20:35.106166 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 01:20:35.106176 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-05 01:20:35.106186 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-05 01:20:35.106196 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 
452MB 2026-01-05 01:20:35.106206 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-05 01:20:35.106215 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-05 01:20:35.106225 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-05 01:20:35.106236 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 01:20:35.106248 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-05 01:20:35.106260 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 01:20:35.106271 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-05 01:20:35.106283 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-05 01:20:35.106294 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-05 01:20:35.106305 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-05 01:20:35.106316 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-05 01:20:35.106353 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-05 01:20:35.106365 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-05 01:20:35.106377 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 
a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-05 01:20:35.106388 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-05 01:20:35.106398 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-05 01:20:35.106424 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-05 01:20:35.106455 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-05 01:20:35.106465 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-05 01:20:35.106476 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-05 01:20:35.106486 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-05 01:20:35.106496 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-05 01:20:35.106506 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-05 01:20:35.106515 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-05 01:20:35.106525 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-05 01:20:35.107271 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-05 01:20:35.107370 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-05 01:20:35.107392 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 
b1fcfbc49057 5 weeks ago 1.1GB 2026-01-05 01:20:35.107402 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-05 01:20:35.107411 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-05 01:20:35.107436 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-05 01:20:35.107445 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-05 01:20:35.107454 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-05 01:20:35.107463 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-05 01:20:35.107472 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-05 01:20:35.107481 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-05 01:20:35.107490 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-05 01:20:35.107521 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-05 01:20:35.107535 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-05 01:20:35.107550 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-05 01:20:35.107569 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-05 01:20:35.107591 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 
840MB
2026-01-05 01:20:35.107604 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB
2026-01-05 01:20:35.472944 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-05 01:20:35.473509 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-05 01:20:35.526230 | orchestrator |
2026-01-05 01:20:35.526318 | orchestrator | ## Containers @ testbed-node-2
2026-01-05 01:20:35.526327 | orchestrator |
2026-01-05 01:20:35.526334 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-05 01:20:35.526341 | orchestrator | + echo
2026-01-05 01:20:35.526348 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-01-05 01:20:35.526356 | orchestrator | + echo
2026-01-05 01:20:35.526362 | orchestrator | + osism container testbed-node-2 ps
2026-01-05 01:20:37.995206 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-05 01:20:37.995329 | orchestrator | 7db950148103 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker
2026-01-05 01:20:37.995346 | orchestrator | 2f225a753ace registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping
2026-01-05 01:20:37.995357 | orchestrator | 8da44e2ecf5a registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2026-01-05 01:20:37.995367 | orchestrator | c04ee17b44f7 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-01-05 01:20:37.995377 | orchestrator | 39d734996384 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-01-05 01:20:37.995387 | orchestrator | 5ec10c336868 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-01-05 01:20:37.995396 | orchestrator | b356ae7d000d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2026-01-05 01:20:37.995406 | orchestrator | dd0366daa4fd registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) magnum_api
2026-01-05 01:20:37.995416 | orchestrator | 04d38ccf01b2 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api
2026-01-05 01:20:37.995425 | orchestrator | 188479cdde66 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_novncproxy
2026-01-05 01:20:37.995435 | orchestrator | 6160de1c6942 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server
2026-01-05 01:20:37.995476 | orchestrator | 92c64f8efce9 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_conductor
2026-01-05 01:20:37.995508 | orchestrator | 949a50bb3c63 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_worker
2026-01-05 01:20:37.995538 | orchestrator | 3b5e15e1a58e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_mdns
2026-01-05 01:20:37.995554 | orchestrator | d4a75381492f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_producer
2026-01-05 01:20:37.995570 | orchestrator | e1c61fd74dd1 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) designate_central
2026-01-05 01:20:37.995586 | orchestrator | b71330ca6fe1 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) designate_api
2026-01-05 01:20:37.995603 | orchestrator | 16fcd799aed8 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_backend_bind9
2026-01-05 01:20:37.995619 | orchestrator | 80b1e5d89857 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker
2026-01-05 01:20:37.995661 | orchestrator | 461f594e902b registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api
2026-01-05 01:20:37.995680 | orchestrator | 1e305f1c0c02 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_keystone_listener
2026-01-05 01:20:37.995696 | orchestrator | aff6877618e2 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 14 minutes ago Up 11 minutes (healthy) nova_scheduler
2026-01-05 01:20:37.995714 | orchestrator | aee30c486135 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api
2026-01-05 01:20:37.995730 | orchestrator | 6494cefc1537 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) glance_api
2026-01-05 01:20:37.995746 | orchestrator | ef7624a4b88a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter
2026-01-05 01:20:37.995765 | orchestrator | 486f9b59acb8 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_backup
2026-01-05 01:20:37.995780 | orchestrator | e0bb197088df registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_volume
2026-01-05 01:20:37.995797 | orchestrator | e9defc720549 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor
2026-01-05 01:20:37.995813 | orchestrator | 7901aff8af60 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter
2026-01-05 01:20:37.995878 | orchestrator | 3a372b0679f1 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_scheduler
2026-01-05 01:20:37.995896 | orchestrator | 6bfd9dd82bd8 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter
2026-01-05 01:20:37.995907 | orchestrator | 188d19a91340 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) cinder_api
2026-01-05 01:20:37.995916 | orchestrator | 9c3e8294fd6d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter
2026-01-05 01:20:37.995926 | orchestrator | c16854213a46 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-2
2026-01-05 01:20:37.995935 | orchestrator | 39b628f63a5c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone
2026-01-05 01:20:37.995945 | orchestrator | 7e404625deb0 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon
2026-01-05 01:20:37.995955 | orchestrator | 2f87dfc313bc registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_fernet
2026-01-05 01:20:37.995964 | orchestrator | 69b38c52c566 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh
2026-01-05 01:20:37.995973 | orchestrator | 2068bde43d46 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards
2026-01-05 01:20:37.995983 | orchestrator | 5c694cc13480 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb
2026-01-05 01:20:37.996003 | orchestrator | ddab8481f412 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch
2026-01-05 01:20:37.996013 | orchestrator | 2b25e6118d8d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-2
2026-01-05 01:20:37.996023 | orchestrator | 6eb613382e11 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived
2026-01-05 01:20:37.996033 | orchestrator | dbc899a373df registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql
2026-01-05 01:20:37.996042 | orchestrator | 5b796ea98ffb registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy
2026-01-05 01:20:37.996200 | orchestrator | 90efd724b5e3 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd
2026-01-05 01:20:37.996226 | orchestrator | 418264c8e03c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db
2026-01-05 01:20:37.996263 | orchestrator | 0f4c56bf7ed2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes ovn_nb_db
2026-01-05 01:20:37.996275 | orchestrator | fd6daa95be52 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq
2026-01-05 01:20:37.996285 | orchestrator | 62d380bde33c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller
2026-01-05 01:20:37.996295 | orchestrator | 449bff127ddd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2
2026-01-05 01:20:37.996305 | orchestrator | e1fea7c71f25 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd
2026-01-05 01:20:37.996321 | orchestrator | 275b8a18954b registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db
2026-01-05 01:20:37.996331 | orchestrator | 381973c9bfd1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel
2026-01-05 01:20:37.996341 | orchestrator | 281b21e91584 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis
2026-01-05 01:20:37.996351 | orchestrator | 5695590fb3b6 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached
2026-01-05 01:20:37.996361 | orchestrator | 960dc7559e1f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes cron
2026-01-05 01:20:37.996370 | orchestrator | 35dbe5dc5441 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox
2026-01-05 01:20:37.996380 | orchestrator | 1d609ba01def registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd
2026-01-05 01:20:38.350652 | orchestrator |
2026-01-05 01:20:38.350771 | orchestrator | ## Images @ testbed-node-2
2026-01-05 01:20:38.350784 | orchestrator |
2026-01-05 01:20:38.350794 | orchestrator | + echo
2026-01-05 01:20:38.350803 | orchestrator | + echo '## Images @ testbed-node-2'
2026-01-05 01:20:38.350813 | orchestrator | + echo
2026-01-05 01:20:38.350822 | orchestrator | + osism container testbed-node-2 images
2026-01-05 01:20:40.812005 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-05 01:20:40.812134 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB
2026-01-05 01:20:40.812158 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB
2026-01-05 01:20:40.812175 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB
2026-01-05 01:20:40.812191 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB
2026-01-05 01:20:40.812209 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB
2026-01-05 01:20:40.812225 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB
2026-01-05 01:20:40.812269 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB
2026-01-05 01:20:40.812286 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB
2026-01-05 01:20:40.812304 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB
2026-01-05 01:20:40.812321 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB
2026-01-05 01:20:40.812338 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB
2026-01-05 01:20:40.812348 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB
2026-01-05 01:20:40.812357 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB
2026-01-05 01:20:40.812367 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB
2026-01-05 01:20:40.812377 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB
2026-01-05 01:20:40.812386 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB
2026-01-05 01:20:40.812396 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB
2026-01-05 01:20:40.812405 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB
2026-01-05 01:20:40.812415 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB
2026-01-05 01:20:40.812424 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB
2026-01-05 01:20:40.812434 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB
2026-01-05 01:20:40.812444 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB
2026-01-05 01:20:40.812453 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB
2026-01-05 01:20:40.812463 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB
2026-01-05 01:20:40.812472 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB
2026-01-05 01:20:40.812482 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB
2026-01-05 01:20:40.812491 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB
2026-01-05 01:20:40.812501 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB
2026-01-05 01:20:40.812513 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB
2026-01-05 01:20:40.812524 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB
2026-01-05 01:20:40.812535 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB
2026-01-05 01:20:40.812568 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB
2026-01-05 01:20:40.812581 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB
2026-01-05 01:20:40.812598 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB
2026-01-05 01:20:40.812609 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB
2026-01-05 01:20:40.812618 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB
2026-01-05 01:20:40.812628 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB
2026-01-05 01:20:40.812638 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB
2026-01-05 01:20:40.812647 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB
2026-01-05 01:20:40.812657 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB
2026-01-05 01:20:40.812667 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB
2026-01-05 01:20:40.812676 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB
2026-01-05 01:20:40.812702 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB
2026-01-05 01:20:40.812712 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB
2026-01-05 01:20:40.812722 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB
2026-01-05 01:20:40.812732 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB
2026-01-05 01:20:40.812741 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB
2026-01-05 01:20:40.812751 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB
2026-01-05 01:20:40.812761 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB
2026-01-05 01:20:40.812770 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB
2026-01-05 01:20:40.812780 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB
2026-01-05 01:20:40.812789 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB
2026-01-05 01:20:40.812799 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB
2026-01-05 01:20:40.812814 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB
2026-01-05 01:20:40.812824 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB
2026-01-05 01:20:40.812833 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB
2026-01-05 01:20:40.812843 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB
2026-01-05 01:20:41.130907 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-01-05 01:20:41.140104 | orchestrator | + set -e
2026-01-05 01:20:41.140206 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 01:20:41.141526 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 01:20:41.141582 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 01:20:41.141636 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 01:20:41.141656 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 01:20:41.141680 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 01:20:41.141701 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 01:20:41.141720 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 01:20:41.141738 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 01:20:41.141755 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 01:20:41.141773 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 01:20:41.141792 | orchestrator | ++ export ARA=false
2026-01-05 01:20:41.141808 | orchestrator | ++ ARA=false
2026-01-05 01:20:41.141826 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 01:20:41.141843 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 01:20:41.141937 | orchestrator | ++ export TEMPEST=true
2026-01-05 01:20:41.141959 | orchestrator | ++ TEMPEST=true
2026-01-05 01:20:41.141976 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 01:20:41.141994 | orchestrator | ++ IS_ZUUL=true
2026-01-05 01:20:41.142012 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2026-01-05 01:20:41.142109 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2026-01-05 01:20:41.142127 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 01:20:41.142145 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 01:20:41.142163 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 01:20:41.142181 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 01:20:41.142199 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 01:20:41.142217 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 01:20:41.142235 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 01:20:41.142253 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 01:20:41.142274 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 01:20:41.142294 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-05 01:20:41.152638 | orchestrator | + set -e
2026-01-05 01:20:41.152731 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 01:20:41.152752 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 01:20:41.152770 | orchestrator | ++ INTERACTIVE=false
2026-01-05 01:20:41.152787 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 01:20:41.152803 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 01:20:41.152820 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-05 01:20:41.153711 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-05 01:20:41.158952 | orchestrator |
2026-01-05 01:20:41.158998 | orchestrator | # Ceph status
2026-01-05 01:20:41.159008 | orchestrator |
2026-01-05 01:20:41.159019 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 01:20:41.159030 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 01:20:41.159040 | orchestrator | + echo
2026-01-05 01:20:41.159050 | orchestrator | + echo '# Ceph status'
2026-01-05 01:20:41.159060 | orchestrator | + echo
2026-01-05 01:20:41.159070 | orchestrator | + ceph -s
2026-01-05 01:20:41.743222 | orchestrator | cluster:
2026-01-05 01:20:41.743336 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-01-05 01:20:41.743357 | orchestrator | health: HEALTH_OK
2026-01-05 01:20:41.743370 | orchestrator |
2026-01-05 01:20:41.743384 | orchestrator | services:
2026-01-05 01:20:41.743398 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m)
2026-01-05 01:20:41.743426 | orchestrator | mgr: testbed-node-2(active, since 18m), standbys: testbed-node-0, testbed-node-1
2026-01-05 01:20:41.743444 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-01-05 01:20:41.743458 | orchestrator | osd: 6 osds: 6 up (since 27m), 6 in (since 27m)
2026-01-05 01:20:41.743473 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-05 01:20:41.743487 | orchestrator |
2026-01-05 01:20:41.743501 | orchestrator | data:
2026-01-05 01:20:41.743515 | orchestrator | volumes: 1/1 healthy
2026-01-05 01:20:41.743528 | orchestrator | pools: 14 pools, 401 pgs
2026-01-05 01:20:41.743541 | orchestrator | objects: 555 objects, 2.2 GiB
2026-01-05 01:20:41.743554 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-01-05 01:20:41.743568 | orchestrator | pgs: 401 active+clean
2026-01-05 01:20:41.743580 | orchestrator |
2026-01-05 01:20:41.790472 | orchestrator |
2026-01-05 01:20:41.790579 | orchestrator | # Ceph versions
2026-01-05 01:20:41.790595 | orchestrator |
2026-01-05 01:20:41.790608 | orchestrator | + echo
2026-01-05 01:20:41.790619 | orchestrator | + echo '# Ceph versions'
2026-01-05 01:20:41.790631 | orchestrator | + echo
2026-01-05 01:20:41.790641 | orchestrator | + ceph versions
2026-01-05 01:20:42.373733 | orchestrator | {
2026-01-05 01:20:42.373837 | orchestrator | "mon": {
2026-01-05 01:20:42.373851 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-05 01:20:42.373937 | orchestrator | },
2026-01-05 01:20:42.373950 | orchestrator | "mgr": {
2026-01-05 01:20:42.373960 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-05 01:20:42.373970 | orchestrator | },
2026-01-05 01:20:42.373980 | orchestrator | "osd": {
2026-01-05 01:20:42.373990 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-05 01:20:42.373999 | orchestrator | },
2026-01-05 01:20:42.374089 | orchestrator | "mds": {
2026-01-05 01:20:42.374099 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-05 01:20:42.374109 | orchestrator | },
2026-01-05 01:20:42.374118 | orchestrator | "rgw": {
2026-01-05 01:20:42.374128 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-05 01:20:42.374138 | orchestrator | },
2026-01-05 01:20:42.374148 | orchestrator | "overall": {
2026-01-05 01:20:42.374158 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-05 01:20:42.374170 | orchestrator | }
2026-01-05 01:20:42.374180 | orchestrator | }
2026-01-05 01:20:42.417763 | orchestrator |
2026-01-05 01:20:42.417845 | orchestrator | # Ceph OSD tree
2026-01-05 01:20:42.417854 | orchestrator |
2026-01-05 01:20:42.417861 | orchestrator | + echo
2026-01-05 01:20:42.417888 | orchestrator | + echo '# Ceph OSD tree'
2026-01-05 01:20:42.417896 | orchestrator | + echo
2026-01-05 01:20:42.417902 | orchestrator | + ceph osd df tree
2026-01-05 01:20:42.954437 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-01-05 01:20:42.954559 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2026-01-05 01:20:42.954575 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2026-01-05 01:20:42.954587 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 70 MiB 19 GiB 5.32 0.90 189 up osd.0
2026-01-05 01:20:42.954598 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.51 1.10 201 up osd.3
2026-01-05 01:20:42.954609 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2026-01-05 01:20:42.954620 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.63 1.12 190 up osd.1
2026-01-05 01:20:42.954631 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 995 MiB 1 KiB 70 MiB 19 GiB 5.20 0.88 202 up osd.4
2026-01-05 01:20:42.954642 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2026-01-05 01:20:42.954654 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.77 1.14 191 up osd.2
2026-01-05 01:20:42.954665 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 963 MiB 1 KiB 74 MiB 19 GiB 5.07 0.86 197 up osd.5
2026-01-05 01:20:42.954676 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-01-05 01:20:42.954687 | orchestrator | MIN/MAX VAR: 0.86/1.14 STDDEV: 0.73
2026-01-05 01:20:43.011356 | orchestrator |
2026-01-05 01:20:43.011455 | orchestrator | # Ceph monitor status
2026-01-05 01:20:43.011472 | orchestrator |
2026-01-05 01:20:43.011484 | orchestrator | + echo
2026-01-05 01:20:43.011496 | orchestrator | + echo '# Ceph monitor status'
2026-01-05 01:20:43.011507 | orchestrator | + echo
2026-01-05 01:20:43.011518 | orchestrator | + ceph mon stat
2026-01-05 01:20:43.597552 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-01-05 01:20:43.639177 | orchestrator |
2026-01-05 01:20:43.639337 | orchestrator | # Ceph quorum status
2026-01-05 01:20:43.639378 | orchestrator |
2026-01-05 01:20:43.639397 | orchestrator | + echo
2026-01-05 01:20:43.639416 | orchestrator | + echo '# Ceph quorum status'
2026-01-05 01:20:43.639436 | orchestrator | + echo
2026-01-05 01:20:43.639456 | orchestrator | + ceph quorum_status
2026-01-05 01:20:43.639495 | orchestrator | + jq
2026-01-05 01:20:44.284357 | orchestrator | {
2026-01-05 01:20:44.284454 | orchestrator | "election_epoch": 8,
2026-01-05 01:20:44.284467 | orchestrator | "quorum": [
2026-01-05 01:20:44.284474 | orchestrator | 0,
2026-01-05 01:20:44.284480 | orchestrator | 1,
2026-01-05 01:20:44.284488 | orchestrator | 2
2026-01-05 01:20:44.284494 | orchestrator | ],
2026-01-05 01:20:44.284500 | orchestrator | "quorum_names": [
2026-01-05 01:20:44.284507 | orchestrator | "testbed-node-0",
2026-01-05 01:20:44.284514 | orchestrator | "testbed-node-1",
2026-01-05 01:20:44.284520 | orchestrator | "testbed-node-2"
2026-01-05 01:20:44.284528 | orchestrator | ],
2026-01-05 01:20:44.284533 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-01-05 01:20:44.284539 | orchestrator | "quorum_age": 1851,
2026-01-05 01:20:44.284544 | orchestrator | "features": {
2026-01-05 01:20:44.284548 | orchestrator | "quorum_con": "4540138322906710015",
2026-01-05 01:20:44.284552 | orchestrator | "quorum_mon": [
2026-01-05 01:20:44.284557 | orchestrator | "kraken",
2026-01-05 01:20:44.284561 | orchestrator | "luminous",
2026-01-05 01:20:44.284565 | orchestrator | "mimic",
2026-01-05 01:20:44.284569 | orchestrator | "osdmap-prune",
2026-01-05 01:20:44.284573 | orchestrator | "nautilus",
2026-01-05 01:20:44.284577 | orchestrator | "octopus",
2026-01-05 01:20:44.284581 | orchestrator | "pacific",
2026-01-05 01:20:44.284585 | orchestrator | "elector-pinging",
2026-01-05 01:20:44.284589 | orchestrator | "quincy",
2026-01-05 01:20:44.284593 | orchestrator | "reef"
2026-01-05 01:20:44.284597 | orchestrator | ]
2026-01-05 01:20:44.284601 | orchestrator | },
2026-01-05 01:20:44.284605 | orchestrator | "monmap": {
2026-01-05 01:20:44.284609 | orchestrator | "epoch": 1,
2026-01-05 01:20:44.284613 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-01-05 01:20:44.284617 | orchestrator | "modified": "2026-01-05T00:49:33.932273Z",
2026-01-05 01:20:44.284621 | orchestrator | "created": "2026-01-05T00:49:33.932273Z",
2026-01-05 01:20:44.284626 | orchestrator | "min_mon_release": 18,
2026-01-05 01:20:44.284630 | orchestrator | "min_mon_release_name": "reef",
2026-01-05 01:20:44.284634 | orchestrator | "election_strategy": 1,
2026-01-05 01:20:44.284637 | orchestrator | "disallowed_leaders: ": "",
2026-01-05 01:20:44.284642 | orchestrator | "stretch_mode": false,
2026-01-05 01:20:44.284646 | orchestrator | "tiebreaker_mon": "",
2026-01-05 01:20:44.284650 | orchestrator | "removed_ranks: ": "",
2026-01-05 01:20:44.284653 | orchestrator | "features": {
2026-01-05 01:20:44.284657 | orchestrator | "persistent": [
2026-01-05 01:20:44.284661 | orchestrator | "kraken",
2026-01-05 01:20:44.284665 | orchestrator | "luminous",
2026-01-05 01:20:44.284669 | orchestrator | "mimic",
2026-01-05 01:20:44.284674 | orchestrator | "osdmap-prune",
2026-01-05 01:20:44.284678 | orchestrator | "nautilus",
2026-01-05 01:20:44.284682 | orchestrator | "octopus",
2026-01-05 01:20:44.284685 | orchestrator | "pacific",
2026-01-05 01:20:44.284689 | orchestrator | "elector-pinging",
2026-01-05 01:20:44.284693 | orchestrator | "quincy",
2026-01-05 01:20:44.284697 | orchestrator | "reef"
2026-01-05 01:20:44.284701 | orchestrator | ],
2026-01-05 01:20:44.284705 | orchestrator | "optional": []
2026-01-05 01:20:44.284709 | orchestrator | },
2026-01-05 01:20:44.284713 | orchestrator | "mons": [
2026-01-05 01:20:44.284716 | orchestrator | {
2026-01-05 01:20:44.284720 | orchestrator | "rank": 0,
2026-01-05 01:20:44.284724 | orchestrator | "name": "testbed-node-0",
2026-01-05 01:20:44.284728 | orchestrator | "public_addrs": {
2026-01-05 01:20:44.284732 | orchestrator | "addrvec": [
2026-01-05 01:20:44.284736 | orchestrator | {
2026-01-05 01:20:44.284740 | orchestrator | "type": "v2",
2026-01-05 01:20:44.284744 | orchestrator | "addr": "192.168.16.10:3300",
2026-01-05 01:20:44.284748 | orchestrator | "nonce": 0
2026-01-05 01:20:44.284752 | orchestrator | },
2026-01-05 01:20:44.284756 | orchestrator | {
2026-01-05 01:20:44.284760 | orchestrator | "type": "v1",
2026-01-05 01:20:44.284763 | orchestrator | "addr": "192.168.16.10:6789",
2026-01-05 01:20:44.284767 | orchestrator | "nonce": 0
2026-01-05 01:20:44.284771 | orchestrator | }
2026-01-05 01:20:44.284775 | orchestrator | ]
2026-01-05 01:20:44.284780 | orchestrator | },
2026-01-05 01:20:44.284786 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-01-05 01:20:44.284793 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-01-05 01:20:44.284825 | orchestrator | "priority": 0,
2026-01-05 01:20:44.284832 | orchestrator | "weight": 0,
2026-01-05 01:20:44.284838 | orchestrator | "crush_location": "{}"
2026-01-05 01:20:44.284845 | orchestrator | },
2026-01-05 01:20:44.284853 | orchestrator | {
2026-01-05 01:20:44.284857 | orchestrator | "rank": 1,
2026-01-05 01:20:44.284861 | orchestrator | "name": "testbed-node-1",
2026-01-05 01:20:44.284865 | orchestrator | "public_addrs": {
2026-01-05 01:20:44.284869 | orchestrator | "addrvec": [
2026-01-05 01:20:44.284873 | orchestrator | {
2026-01-05 01:20:44.284878 | orchestrator | "type": "v2",
2026-01-05 01:20:44.284928 | orchestrator | "addr": "192.168.16.11:3300",
2026-01-05 01:20:44.284933 | orchestrator | "nonce": 0
2026-01-05 01:20:44.284949 | orchestrator | },
2026-01-05 01:20:44.284954 | orchestrator | {
2026-01-05 01:20:44.284959 | orchestrator | "type": "v1",
2026-01-05 01:20:44.284963 | orchestrator | "addr": "192.168.16.11:6789",
2026-01-05 01:20:44.284968 | orchestrator | "nonce": 0
2026-01-05 01:20:44.284972 | orchestrator | }
2026-01-05 01:20:44.284977 | orchestrator | ]
2026-01-05 01:20:44.284981 | orchestrator | },
2026-01-05 01:20:44.284985 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-01-05 01:20:44.284990 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-01-05 01:20:44.284994 | orchestrator | "priority": 0,
2026-01-05 01:20:44.284999 | orchestrator | "weight": 0,
2026-01-05 01:20:44.285003 | orchestrator | "crush_location": "{}"
2026-01-05 01:20:44.285008 | orchestrator | },
2026-01-05 01:20:44.285012 | orchestrator | {
2026-01-05 01:20:44.285018 | orchestrator | "rank": 2,
2026-01-05 01:20:44.285024 | orchestrator | "name": "testbed-node-2",
2026-01-05 01:20:44.285035 | orchestrator | "public_addrs": {
2026-01-05 01:20:44.285043 | orchestrator | "addrvec": [
2026-01-05 01:20:44.285049 | orchestrator | {
2026-01-05 01:20:44.285055 | orchestrator | "type": "v2",
2026-01-05 01:20:44.285063 | orchestrator | "addr": "192.168.16.12:3300",
2026-01-05 01:20:44.285070 | orchestrator | "nonce": 0
2026-01-05 01:20:44.285077 | orchestrator | },
2026-01-05 01:20:44.285085 | orchestrator | {
2026-01-05 01:20:44.285089 | orchestrator | "type": "v1",
2026-01-05 01:20:44.285094 | orchestrator | "addr": "192.168.16.12:6789",
2026-01-05 01:20:44.285099 | orchestrator | "nonce": 0
2026-01-05 01:20:44.285103 | orchestrator | }
2026-01-05 01:20:44.285108 | orchestrator | ]
2026-01-05 01:20:44.285112 | orchestrator | },
2026-01-05 01:20:44.285117 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-01-05 01:20:44.285121 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-01-05 01:20:44.285129 | orchestrator | "priority": 0,
2026-01-05 01:20:44.285134 | orchestrator | "weight": 0,
2026-01-05 01:20:44.285139 | orchestrator | "crush_location": "{}"
2026-01-05 01:20:44.285144 | orchestrator | }
2026-01-05 01:20:44.285151 | orchestrator | ]
2026-01-05 01:20:44.285158 | orchestrator | }
2026-01-05 01:20:44.285163 | orchestrator | }
2026-01-05 01:20:44.285265 | orchestrator |
2026-01-05 01:20:44.285272 | orchestrator | # Ceph free space status
2026-01-05 01:20:44.285276 | orchestrator |
2026-01-05 01:20:44.285280 | orchestrator | + echo
2026-01-05 01:20:44.285284 | orchestrator | + echo '# Ceph free space status'
2026-01-05 01:20:44.285288 | orchestrator | + echo
2026-01-05 01:20:44.285292 | orchestrator | + ceph df
2026-01-05 01:20:44.881364 | orchestrator | --- RAW STORAGE ---
2026-01-05 01:20:44.881494 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-01-05 01:20:44.881533 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-01-05 01:20:44.881550 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2026-01-05 01:20:44.881566 | orchestrator |
2026-01-05 01:20:44.881584 | orchestrator | --- POOLS ---
2026-01-05 01:20:44.881601 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-01-05 01:20:44.881618 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-01-05 01:20:44.881634 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-01-05 01:20:44.881651 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-01-05 01:20:44.881668 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-01-05 01:20:44.881684 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-01-05 01:20:44.881726 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-01-05 01:20:44.881737 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-01-05 01:20:44.881746 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-01-05 01:20:44.881756 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-01-05 01:20:44.881766 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 01:20:44.881775 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 01:20:44.881785 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2026-01-05 01:20:44.881795 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 01:20:44.881804 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 01:20:44.929009 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 01:20:44.989436 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 01:20:44.989530 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-01-05 01:20:44.989546 | orchestrator | + osism apply facts 2026-01-05 01:20:47.200390 | orchestrator | 2026-01-05 01:20:47 | INFO  | Task 6242f8cd-feb8-40ea-bb1d-59719b6711f1 (facts) was prepared for execution. 2026-01-05 01:20:47.200524 | orchestrator | 2026-01-05 01:20:47 | INFO  | It takes a moment until task 6242f8cd-feb8-40ea-bb1d-59719b6711f1 (facts) has been started and output is visible here. 
2026-01-05 01:21:01.705700 | orchestrator | 2026-01-05 01:21:01.705823 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 01:21:01.705845 | orchestrator | 2026-01-05 01:21:01.705861 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 01:21:01.705876 | orchestrator | Monday 05 January 2026 01:20:51 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-01-05 01:21:01.705890 | orchestrator | ok: [testbed-manager] 2026-01-05 01:21:01.705906 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:01.705921 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:01.705935 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:01.705949 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:21:01.705963 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:21:01.705978 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:21:01.706110 | orchestrator | 2026-01-05 01:21:01.706130 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 01:21:01.706145 | orchestrator | Monday 05 January 2026 01:20:53 +0000 (0:00:01.634) 0:00:01.913 ******** 2026-01-05 01:21:01.706158 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:21:01.706173 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:01.706187 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:01.706202 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:01.706216 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:21:01.706230 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:21:01.706244 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:21:01.706257 | orchestrator | 2026-01-05 01:21:01.706271 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 01:21:01.706285 | orchestrator | 2026-01-05 01:21:01.706299 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 01:21:01.706314 | orchestrator | Monday 05 January 2026 01:20:54 +0000 (0:00:01.337) 0:00:03.250 ******** 2026-01-05 01:21:01.706328 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:01.706341 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:01.706356 | orchestrator | ok: [testbed-manager] 2026-01-05 01:21:01.706370 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:01.706383 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:21:01.706397 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:21:01.706411 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:21:01.706425 | orchestrator | 2026-01-05 01:21:01.706439 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 01:21:01.706453 | orchestrator | 2026-01-05 01:21:01.706525 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 01:21:01.706541 | orchestrator | Monday 05 January 2026 01:21:00 +0000 (0:00:05.979) 0:00:09.230 ******** 2026-01-05 01:21:01.706555 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:21:01.706570 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:01.706585 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:01.706601 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:01.706616 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:21:01.706632 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:21:01.706647 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:21:01.706663 | orchestrator | 2026-01-05 01:21:01.706678 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:21:01.706693 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706725 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 01:21:01.706741 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706758 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706773 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706788 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706804 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:01.706820 | orchestrator | 2026-01-05 01:21:01.706836 | orchestrator | 2026-01-05 01:21:01.706849 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:21:01.706865 | orchestrator | Monday 05 January 2026 01:21:01 +0000 (0:00:00.592) 0:00:09.823 ******** 2026-01-05 01:21:01.706880 | orchestrator | =============================================================================== 2026-01-05 01:21:01.706896 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.98s 2026-01-05 01:21:01.706911 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.63s 2026-01-05 01:21:01.706927 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2026-01-05 01:21:01.706942 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-01-05 01:21:02.062671 | orchestrator | + osism validate ceph-mons 2026-01-05 01:21:35.946268 | orchestrator | 2026-01-05 01:21:35.946407 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-01-05 01:21:35.946426 | orchestrator | 2026-01-05 01:21:35.946440 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-01-05 01:21:35.946453 | orchestrator | Monday 05 January 2026 01:21:19 +0000 (0:00:00.474) 0:00:00.474 ******** 2026-01-05 01:21:35.946466 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.946476 | orchestrator | 2026-01-05 01:21:35.946487 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-05 01:21:35.946498 | orchestrator | Monday 05 January 2026 01:21:20 +0000 (0:00:01.866) 0:00:02.340 ******** 2026-01-05 01:21:35.946509 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.946521 | orchestrator | 2026-01-05 01:21:35.946533 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-05 01:21:35.946545 | orchestrator | Monday 05 January 2026 01:21:21 +0000 (0:00:01.032) 0:00:03.373 ******** 2026-01-05 01:21:35.946557 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.946599 | orchestrator | 2026-01-05 01:21:35.946613 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-05 01:21:35.946625 | orchestrator | Monday 05 January 2026 01:21:22 +0000 (0:00:00.163) 0:00:03.536 ******** 2026-01-05 01:21:35.946637 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.946648 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:35.946661 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:35.946673 | orchestrator | 2026-01-05 01:21:35.946686 | orchestrator | TASK [Get container info] ****************************************************** 2026-01-05 01:21:35.946699 | orchestrator | Monday 05 January 2026 01:21:22 +0000 (0:00:00.299) 0:00:03.836 ******** 2026-01-05 01:21:35.946711 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.946722 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:35.946734 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:35.946745 | 
orchestrator | 2026-01-05 01:21:35.946758 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-05 01:21:35.946772 | orchestrator | Monday 05 January 2026 01:21:23 +0000 (0:00:00.984) 0:00:04.821 ******** 2026-01-05 01:21:35.946789 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.946804 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:35.946818 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:35.946831 | orchestrator | 2026-01-05 01:21:35.946846 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-05 01:21:35.946859 | orchestrator | Monday 05 January 2026 01:21:23 +0000 (0:00:00.287) 0:00:05.108 ******** 2026-01-05 01:21:35.946873 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.946887 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:35.946900 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:35.946913 | orchestrator | 2026-01-05 01:21:35.946926 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 01:21:35.946939 | orchestrator | Monday 05 January 2026 01:21:24 +0000 (0:00:00.495) 0:00:05.603 ******** 2026-01-05 01:21:35.946953 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.946966 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:35.946979 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:35.946992 | orchestrator | 2026-01-05 01:21:35.947005 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-01-05 01:21:35.947018 | orchestrator | Monday 05 January 2026 01:21:24 +0000 (0:00:00.338) 0:00:05.942 ******** 2026-01-05 01:21:35.947032 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947045 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:35.947059 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:35.947073 | orchestrator | 2026-01-05 
01:21:35.947086 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-01-05 01:21:35.947100 | orchestrator | Monday 05 January 2026 01:21:24 +0000 (0:00:00.304) 0:00:06.246 ******** 2026-01-05 01:21:35.947113 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.947125 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:35.947138 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:35.947151 | orchestrator | 2026-01-05 01:21:35.947165 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 01:21:35.947177 | orchestrator | Monday 05 January 2026 01:21:25 +0000 (0:00:00.527) 0:00:06.774 ******** 2026-01-05 01:21:35.947190 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947288 | orchestrator | 2026-01-05 01:21:35.947304 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 01:21:35.947318 | orchestrator | Monday 05 January 2026 01:21:25 +0000 (0:00:00.273) 0:00:07.047 ******** 2026-01-05 01:21:35.947331 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947344 | orchestrator | 2026-01-05 01:21:35.947356 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 01:21:35.947369 | orchestrator | Monday 05 January 2026 01:21:25 +0000 (0:00:00.270) 0:00:07.318 ******** 2026-01-05 01:21:35.947382 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947395 | orchestrator | 2026-01-05 01:21:35.947408 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:35.947434 | orchestrator | Monday 05 January 2026 01:21:26 +0000 (0:00:00.277) 0:00:07.595 ******** 2026-01-05 01:21:35.947447 | orchestrator | 2026-01-05 01:21:35.947479 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:35.947491 | orchestrator | 
Monday 05 January 2026 01:21:26 +0000 (0:00:00.075) 0:00:07.671 ******** 2026-01-05 01:21:35.947503 | orchestrator | 2026-01-05 01:21:35.947515 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:35.947525 | orchestrator | Monday 05 January 2026 01:21:26 +0000 (0:00:00.077) 0:00:07.748 ******** 2026-01-05 01:21:35.947536 | orchestrator | 2026-01-05 01:21:35.947548 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 01:21:35.947562 | orchestrator | Monday 05 January 2026 01:21:26 +0000 (0:00:00.080) 0:00:07.828 ******** 2026-01-05 01:21:35.947575 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947588 | orchestrator | 2026-01-05 01:21:35.947600 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-05 01:21:35.947613 | orchestrator | Monday 05 January 2026 01:21:26 +0000 (0:00:00.253) 0:00:08.082 ******** 2026-01-05 01:21:35.947625 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947637 | orchestrator | 2026-01-05 01:21:35.947670 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-05 01:21:35.947684 | orchestrator | Monday 05 January 2026 01:21:26 +0000 (0:00:00.257) 0:00:08.340 ******** 2026-01-05 01:21:35.947695 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.947706 | orchestrator | 2026-01-05 01:21:35.947717 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-05 01:21:35.947728 | orchestrator | Monday 05 January 2026 01:21:27 +0000 (0:00:00.139) 0:00:08.479 ******** 2026-01-05 01:21:35.947739 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:35.947751 | orchestrator | 2026-01-05 01:21:35.947762 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-05 01:21:35.947774 | orchestrator | Monday 
05 January 2026 01:21:28 +0000 (0:00:01.501) 0:00:09.981 ******** 2026-01-05 01:21:35.947784 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.947795 | orchestrator | 2026-01-05 01:21:35.947806 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-01-05 01:21:35.947817 | orchestrator | Monday 05 January 2026 01:21:29 +0000 (0:00:00.550) 0:00:10.531 ******** 2026-01-05 01:21:35.947829 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947840 | orchestrator | 2026-01-05 01:21:35.947851 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-01-05 01:21:35.947863 | orchestrator | Monday 05 January 2026 01:21:29 +0000 (0:00:00.128) 0:00:10.659 ******** 2026-01-05 01:21:35.947874 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.947885 | orchestrator | 2026-01-05 01:21:35.947896 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-01-05 01:21:35.947906 | orchestrator | Monday 05 January 2026 01:21:29 +0000 (0:00:00.324) 0:00:10.984 ******** 2026-01-05 01:21:35.947917 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.947929 | orchestrator | 2026-01-05 01:21:35.947940 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-01-05 01:21:35.947952 | orchestrator | Monday 05 January 2026 01:21:29 +0000 (0:00:00.306) 0:00:11.290 ******** 2026-01-05 01:21:35.947963 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.947974 | orchestrator | 2026-01-05 01:21:35.947985 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-01-05 01:21:35.947996 | orchestrator | Monday 05 January 2026 01:21:30 +0000 (0:00:00.134) 0:00:11.425 ******** 2026-01-05 01:21:35.948007 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.948017 | orchestrator | 2026-01-05 01:21:35.948028 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-01-05 01:21:35.948039 | orchestrator | Monday 05 January 2026 01:21:30 +0000 (0:00:00.158) 0:00:11.584 ******** 2026-01-05 01:21:35.948050 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.948072 | orchestrator | 2026-01-05 01:21:35.948084 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-05 01:21:35.948096 | orchestrator | Monday 05 January 2026 01:21:30 +0000 (0:00:00.124) 0:00:11.709 ******** 2026-01-05 01:21:35.948107 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:35.948118 | orchestrator | 2026-01-05 01:21:35.948196 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-05 01:21:35.948287 | orchestrator | Monday 05 January 2026 01:21:31 +0000 (0:00:01.338) 0:00:13.047 ******** 2026-01-05 01:21:35.948300 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.948312 | orchestrator | 2026-01-05 01:21:35.948323 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-05 01:21:35.948334 | orchestrator | Monday 05 January 2026 01:21:31 +0000 (0:00:00.325) 0:00:13.372 ******** 2026-01-05 01:21:35.948376 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.948388 | orchestrator | 2026-01-05 01:21:35.948399 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-05 01:21:35.948410 | orchestrator | Monday 05 January 2026 01:21:32 +0000 (0:00:00.155) 0:00:13.528 ******** 2026-01-05 01:21:35.948421 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:35.948432 | orchestrator | 2026-01-05 01:21:35.948453 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-05 01:21:35.948464 | orchestrator | Monday 05 January 2026 01:21:32 +0000 (0:00:00.150) 0:00:13.679 ******** 2026-01-05 01:21:35.948474 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 01:21:35.948484 | orchestrator | 2026-01-05 01:21:35.948493 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-05 01:21:35.948505 | orchestrator | Monday 05 January 2026 01:21:32 +0000 (0:00:00.147) 0:00:13.826 ******** 2026-01-05 01:21:35.948515 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.948526 | orchestrator | 2026-01-05 01:21:35.948535 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-05 01:21:35.948546 | orchestrator | Monday 05 January 2026 01:21:32 +0000 (0:00:00.367) 0:00:14.194 ******** 2026-01-05 01:21:35.948557 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.948568 | orchestrator | 2026-01-05 01:21:35.948579 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-05 01:21:35.948591 | orchestrator | Monday 05 January 2026 01:21:33 +0000 (0:00:00.256) 0:00:14.450 ******** 2026-01-05 01:21:35.948602 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:35.948613 | orchestrator | 2026-01-05 01:21:35.948624 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 01:21:35.948635 | orchestrator | Monday 05 January 2026 01:21:33 +0000 (0:00:00.269) 0:00:14.720 ******** 2026-01-05 01:21:35.948646 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.948658 | orchestrator | 2026-01-05 01:21:35.948670 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 01:21:35.948687 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:01.779) 0:00:16.500 ******** 2026-01-05 01:21:35.948697 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.948738 | orchestrator | 2026-01-05 01:21:35.948752 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-01-05 01:21:35.948764 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:00.320) 0:00:16.820 ******** 2026-01-05 01:21:35.948775 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:35.948786 | orchestrator | 2026-01-05 01:21:35.948809 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:38.701881 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:00.275) 0:00:17.096 ******** 2026-01-05 01:21:38.701958 | orchestrator | 2026-01-05 01:21:38.701968 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:38.701975 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:00.086) 0:00:17.183 ******** 2026-01-05 01:21:38.702005 | orchestrator | 2026-01-05 01:21:38.702097 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:21:38.702103 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:00.072) 0:00:17.256 ******** 2026-01-05 01:21:38.702107 | orchestrator | 2026-01-05 01:21:38.702111 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-05 01:21:38.702115 | orchestrator | Monday 05 January 2026 01:21:35 +0000 (0:00:00.074) 0:00:17.331 ******** 2026-01-05 01:21:38.702119 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 01:21:38.702123 | orchestrator | 2026-01-05 01:21:38.702127 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 01:21:38.702131 | orchestrator | Monday 05 January 2026 01:21:37 +0000 (0:00:01.549) 0:00:18.880 ******** 2026-01-05 01:21:38.702135 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-05 01:21:38.702139 | orchestrator |  "msg": [ 2026-01-05 
01:21:38.702143 | orchestrator |  "Validator run completed.", 2026-01-05 01:21:38.702148 | orchestrator |  "You can find the report file here:", 2026-01-05 01:21:38.702152 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-05T01:21:19+00:00-report.json", 2026-01-05 01:21:38.702156 | orchestrator |  "on the following host:", 2026-01-05 01:21:38.702160 | orchestrator |  "testbed-manager" 2026-01-05 01:21:38.702164 | orchestrator |  ] 2026-01-05 01:21:38.702168 | orchestrator | } 2026-01-05 01:21:38.702172 | orchestrator | 2026-01-05 01:21:38.702175 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:21:38.702180 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-05 01:21:38.702184 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:38.702189 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 01:21:38.702192 | orchestrator | 2026-01-05 01:21:38.702196 | orchestrator | 2026-01-05 01:21:38.702200 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:21:38.702204 | orchestrator | Monday 05 January 2026 01:21:38 +0000 (0:00:00.866) 0:00:19.747 ******** 2026-01-05 01:21:38.702207 | orchestrator | =============================================================================== 2026-01-05 01:21:38.702211 | orchestrator | Get timestamp for report file ------------------------------------------- 1.87s 2026-01-05 01:21:38.702262 | orchestrator | Aggregate test results step one ----------------------------------------- 1.78s 2026-01-05 01:21:38.702267 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2026-01-05 01:21:38.702273 | orchestrator | Get monmap info from one mon container 
---------------------------------- 1.50s 2026-01-05 01:21:38.702279 | orchestrator | Gather status data ------------------------------------------------------ 1.34s 2026-01-05 01:21:38.702289 | orchestrator | Create report output directory ------------------------------------------ 1.03s 2026-01-05 01:21:38.702297 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2026-01-05 01:21:38.702303 | orchestrator | Print report file information ------------------------------------------- 0.87s 2026-01-05 01:21:38.702309 | orchestrator | Set quorum test data ---------------------------------------------------- 0.55s 2026-01-05 01:21:38.702314 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.53s 2026-01-05 01:21:38.702320 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2026-01-05 01:21:38.702326 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.37s 2026-01-05 01:21:38.702331 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-01-05 01:21:38.702338 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2026-01-05 01:21:38.702352 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-01-05 01:21:38.702358 | orchestrator | Aggregate test results step two ----------------------------------------- 0.32s 2026-01-05 01:21:38.702363 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2026-01-05 01:21:38.702369 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2026-01-05 01:21:38.702375 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-01-05 01:21:38.702380 | orchestrator | Set test result to failed if container is missing 
----------------------- 0.29s
2026-01-05 01:21:39.044928 | orchestrator | + osism validate ceph-mgrs
2026-01-05 01:22:11.006951 | orchestrator |
2026-01-05 01:22:11.007100 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-01-05 01:22:11.007129 | orchestrator |
2026-01-05 01:22:11.007145 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-05 01:22:11.007163 | orchestrator | Monday 05 January 2026 01:21:55 +0000 (0:00:00.438) 0:00:00.438 ********
2026-01-05 01:22:11.007181 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.007196 | orchestrator |
2026-01-05 01:22:11.007212 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-05 01:22:11.007228 | orchestrator | Monday 05 January 2026 01:21:56 +0000 (0:00:00.867) 0:00:01.306 ********
2026-01-05 01:22:11.007245 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.007262 | orchestrator |
2026-01-05 01:22:11.007278 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-05 01:22:11.007374 | orchestrator | Monday 05 January 2026 01:21:57 +0000 (0:00:01.174) 0:00:02.481 ********
2026-01-05 01:22:11.007395 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.007447 | orchestrator |
2026-01-05 01:22:11.007465 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-05 01:22:11.007482 | orchestrator | Monday 05 January 2026 01:21:58 +0000 (0:00:00.312) 0:00:02.656 ********
2026-01-05 01:22:11.007500 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.007517 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:22:11.007537 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:22:11.007554 | orchestrator |
2026-01-05 01:22:11.007572 | orchestrator | TASK [Get container info] ******************************************************
2026-01-05 01:22:11.007591 | orchestrator | Monday 05 January 2026 01:21:58 +0000 (0:00:00.312) 0:00:02.968 ********
2026-01-05 01:22:11.007608 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:22:11.007626 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:22:11.007642 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.007662 | orchestrator |
2026-01-05 01:22:11.007681 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-05 01:22:11.007699 | orchestrator | Monday 05 January 2026 01:21:59 +0000 (0:00:00.965) 0:00:03.934 ********
2026-01-05 01:22:11.007716 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.007733 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:22:11.007750 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:22:11.007766 | orchestrator |
2026-01-05 01:22:11.007782 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-05 01:22:11.007800 | orchestrator | Monday 05 January 2026 01:21:59 +0000 (0:00:00.296) 0:00:04.230 ********
2026-01-05 01:22:11.007816 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.007833 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:22:11.007850 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:22:11.007867 | orchestrator |
2026-01-05 01:22:11.007883 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 01:22:11.007900 | orchestrator | Monday 05 January 2026 01:22:00 +0000 (0:00:00.515) 0:00:04.746 ********
2026-01-05 01:22:11.007919 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.007935 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:22:11.007951 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:22:11.008007 | orchestrator |
2026-01-05 01:22:11.008027 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-01-05 01:22:11.008045 | orchestrator | Monday 05 January 2026 01:22:00 +0000 (0:00:00.339) 0:00:05.086 ********
2026-01-05 01:22:11.008062 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008079 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:22:11.008095 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:22:11.008111 | orchestrator |
2026-01-05 01:22:11.008128 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-01-05 01:22:11.008144 | orchestrator | Monday 05 January 2026 01:22:00 +0000 (0:00:00.284) 0:00:05.371 ********
2026-01-05 01:22:11.008160 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.008177 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:22:11.008215 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:22:11.008231 | orchestrator |
2026-01-05 01:22:11.008248 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-05 01:22:11.008265 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.528) 0:00:05.899 ********
2026-01-05 01:22:11.008281 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008297 | orchestrator |
2026-01-05 01:22:11.008314 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-05 01:22:11.008330 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.293) 0:00:06.193 ********
2026-01-05 01:22:11.008347 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008363 | orchestrator |
2026-01-05 01:22:11.008385 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-05 01:22:11.008431 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.277) 0:00:06.471 ********
2026-01-05 01:22:11.008450 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008466 | orchestrator |
2026-01-05 01:22:11.008483 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.008499 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.260) 0:00:06.732 ********
2026-01-05 01:22:11.008515 | orchestrator |
2026-01-05 01:22:11.008532 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.008548 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.073) 0:00:06.805 ********
2026-01-05 01:22:11.008564 | orchestrator |
2026-01-05 01:22:11.008581 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.008597 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.072) 0:00:06.878 ********
2026-01-05 01:22:11.008613 | orchestrator |
2026-01-05 01:22:11.008629 | orchestrator | TASK [Print report file information] *******************************************
2026-01-05 01:22:11.008646 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.075) 0:00:06.953 ********
2026-01-05 01:22:11.008662 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008678 | orchestrator |
2026-01-05 01:22:11.008695 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-05 01:22:11.008711 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.276) 0:00:07.229 ********
2026-01-05 01:22:11.008727 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.008743 | orchestrator |
2026-01-05 01:22:11.008792 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-01-05 01:22:11.008809 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.251) 0:00:07.481 ********
2026-01-05 01:22:11.008825 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.008841 | orchestrator |
2026-01-05 01:22:11.008857 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-01-05 01:22:11.008873 | orchestrator | Monday 05 January 2026 01:22:03 +0000 (0:00:00.138) 0:00:07.620 ********
2026-01-05 01:22:11.008890 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:22:11.008906 | orchestrator |
2026-01-05 01:22:11.008922 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-01-05 01:22:11.008938 | orchestrator | Monday 05 January 2026 01:22:05 +0000 (0:00:02.021) 0:00:09.641 ********
2026-01-05 01:22:11.008968 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.008984 | orchestrator |
2026-01-05 01:22:11.009000 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-01-05 01:22:11.009015 | orchestrator | Monday 05 January 2026 01:22:05 +0000 (0:00:00.541) 0:00:10.182 ********
2026-01-05 01:22:11.009031 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.009048 | orchestrator |
2026-01-05 01:22:11.009065 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-01-05 01:22:11.009082 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.370) 0:00:10.553 ********
2026-01-05 01:22:11.009099 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.009116 | orchestrator |
2026-01-05 01:22:11.009134 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-01-05 01:22:11.009150 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.174) 0:00:10.728 ********
2026-01-05 01:22:11.009168 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:22:11.009186 | orchestrator |
2026-01-05 01:22:11.009203 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-05 01:22:11.009221 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.157) 0:00:10.885 ********
2026-01-05 01:22:11.009239 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.009257 | orchestrator |
2026-01-05 01:22:11.009275 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-05 01:22:11.009293 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.278) 0:00:11.163 ********
2026-01-05 01:22:11.009311 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:22:11.009329 | orchestrator |
2026-01-05 01:22:11.009346 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-05 01:22:11.009364 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.246) 0:00:11.410 ********
2026-01-05 01:22:11.009382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.009421 | orchestrator |
2026-01-05 01:22:11.009441 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-05 01:22:11.009486 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:01.304) 0:00:12.715 ********
2026-01-05 01:22:11.009503 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.009521 | orchestrator |
2026-01-05 01:22:11.009539 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-05 01:22:11.009556 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.266) 0:00:12.982 ********
2026-01-05 01:22:11.009573 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.009589 | orchestrator |
2026-01-05 01:22:11.009604 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.009622 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.254) 0:00:13.237 ********
2026-01-05 01:22:11.009639 | orchestrator |
2026-01-05 01:22:11.009657 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.009675 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.077) 0:00:13.314 ********
2026-01-05 01:22:11.009693 | orchestrator |
2026-01-05 01:22:11.009710 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 01:22:11.009728 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.095) 0:00:13.409 ********
2026-01-05 01:22:11.009746 | orchestrator |
2026-01-05 01:22:11.009764 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-05 01:22:11.009781 | orchestrator | Monday 05 January 2026 01:22:09 +0000 (0:00:00.290) 0:00:13.700 ********
2026-01-05 01:22:11.009796 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:11.009812 | orchestrator |
2026-01-05 01:22:11.009837 | orchestrator | TASK [Print report file information] *******************************************
2026-01-05 01:22:11.009855 | orchestrator | Monday 05 January 2026 01:22:10 +0000 (0:00:01.389) 0:00:15.090 ********
2026-01-05 01:22:11.009873 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-05 01:22:11.009920 | orchestrator |  "msg": [
2026-01-05 01:22:11.009940 | orchestrator |  "Validator run completed.",
2026-01-05 01:22:11.009957 | orchestrator |  "You can find the report file here:",
2026-01-05 01:22:11.009975 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-05T01:21:56+00:00-report.json",
2026-01-05 01:22:11.009994 | orchestrator |  "on the following host:",
2026-01-05 01:22:11.010012 | orchestrator |  "testbed-manager"
2026-01-05 01:22:11.010118 | orchestrator |  ]
2026-01-05 01:22:11.010187 | orchestrator | }
2026-01-05 01:22:11.010208 | orchestrator |
2026-01-05 01:22:11.010226 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:22:11.010246 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 01:22:11.010265 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:22:11.010299 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:22:11.360384 | orchestrator |
2026-01-05 01:22:11.360519 | orchestrator |
2026-01-05 01:22:11.360529 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:22:11.360537 | orchestrator | Monday 05 January 2026 01:22:10 +0000 (0:00:00.410) 0:00:15.501 ********
2026-01-05 01:22:11.360545 | orchestrator | ===============================================================================
2026-01-05 01:22:11.360553 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.02s
2026-01-05 01:22:11.360567 | orchestrator | Write report file ------------------------------------------------------- 1.39s
2026-01-05 01:22:11.360576 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s
2026-01-05 01:22:11.360584 | orchestrator | Create report output directory ------------------------------------------ 1.17s
2026-01-05 01:22:11.360591 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2026-01-05 01:22:11.360599 | orchestrator | Get timestamp for report file ------------------------------------------- 0.87s
2026-01-05 01:22:11.360607 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.54s
2026-01-05 01:22:11.360619 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.53s
2026-01-05 01:22:11.360629 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2026-01-05 01:22:11.360637 | orchestrator | Flush handlers ---------------------------------------------------------- 0.46s
2026-01-05 01:22:11.360645 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-01-05 01:22:11.360653 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.37s
2026-01-05 01:22:11.360662 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2026-01-05 01:22:11.360670 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-01-05 01:22:11.360679 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-01-05 01:22:11.360688 | orchestrator | Aggregate test results step one ----------------------------------------- 0.29s
2026-01-05 01:22:11.360697 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2026-01-05 01:22:11.360706 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-01-05 01:22:11.360715 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-01-05 01:22:11.360722 | orchestrator | Print report file information ------------------------------------------- 0.28s
2026-01-05 01:22:11.700748 | orchestrator | + osism validate ceph-osds
2026-01-05 01:22:33.292503 | orchestrator |
2026-01-05 01:22:33.293435 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-01-05 01:22:33.293471 | orchestrator |
2026-01-05 01:22:33.293503 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-05 01:22:33.293511 | orchestrator | Monday 05 January 2026 01:22:28 +0000 (0:00:00.450) 0:00:00.450 ********
2026-01-05 01:22:33.293519 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:33.293549 | orchestrator |
2026-01-05 01:22:33.293556 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 01:22:33.293562 | orchestrator | Monday 05 January 2026 01:22:29 +0000 (0:00:00.852) 0:00:01.302 ********
2026-01-05 01:22:33.293568 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:33.293575 | orchestrator |
2026-01-05 01:22:33.293580 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-05 01:22:33.293586 | orchestrator | Monday 05 January 2026 01:22:29 +0000 (0:00:00.523) 0:00:01.826 ********
2026-01-05 01:22:33.293592 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 01:22:33.293598 | orchestrator |
2026-01-05 01:22:33.293604 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-05 01:22:33.293610 | orchestrator | Monday 05 January 2026 01:22:30 +0000 (0:00:00.744) 0:00:02.571 ********
2026-01-05 01:22:33.293615 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:33.293621 | orchestrator |
2026-01-05 01:22:33.293627 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-05 01:22:33.293633 | orchestrator | Monday 05 January 2026 01:22:30 +0000 (0:00:00.151) 0:00:02.722 ********
2026-01-05 01:22:33.293639 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:22:33.293645 | orchestrator |
2026-01-05 01:22:33.293651 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-05 01:22:33.293657 | orchestrator | Monday 05 January 2026 01:22:31 +0000 (0:00:00.154) 0:00:02.877 ********
2026-01-05 01:22:33.293663 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:22:33.293668 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:22:33.293675 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:22:33.293680 | orchestrator |
2026-01-05 01:22:33.293687 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-05 01:22:33.293693 | orchestrator | Monday 05 January 2026 01:22:31 +0000 (0:00:00.331) 0:00:03.208 ********
2026-01-05 01:22:33.293699 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:33.293705 | orchestrator |
2026-01-05 01:22:33.293712 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-05 01:22:33.293718 | orchestrator | Monday 05 January 2026 01:22:31 +0000 (0:00:00.160) 0:00:03.369 ********
2026-01-05 01:22:33.293724 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:33.293730 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:22:33.293736 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:22:33.293743 | orchestrator |
2026-01-05 01:22:33.293749 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-01-05 01:22:33.293755 | orchestrator | Monday 05 January 2026 01:22:31 +0000 (0:00:00.340) 0:00:03.709 ********
2026-01-05 01:22:33.293761 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:33.293768 | orchestrator |
2026-01-05 01:22:33.293774 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 01:22:33.293780 | orchestrator | Monday 05 January 2026 01:22:32 +0000 (0:00:00.592) 0:00:04.302 ********
2026-01-05 01:22:33.293786 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:33.293792 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:22:33.293798 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:22:33.293804 | orchestrator |
2026-01-05 01:22:33.293810 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-01-05 01:22:33.293817 | orchestrator | Monday 05 January 2026 01:22:32 +0000 (0:00:00.491) 0:00:04.794 ********
2026-01-05 01:22:33.293826 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c20486659d4295d6674578c7b7f9f37b3ef648e36f657e9286dcd6806a35de9', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-01-05 01:22:33.293843 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7e24ffb8a3fb3497a57ee821025acbad9181f52c2587f9af1921ef0d87e51cf', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.293849 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71a59d78a8ba59da4792bd04c4a7e5a8a4b6a7c275b48a21ef3a629ba13d0e94', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.293857 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7384648859ae600c45199a4aeb31700254ab3569aa8b80b1e6f9891430ceb6de', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.293864 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e058f2e4525ca29ed34b82ec11efb94d6c9b53170343f72a5ed3521c79c9e100', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-05 01:22:33.293894 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0def05e260213f0128e1d3eced8f3438f2cbce5631c631cfbc1cf257b6c6e91', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.293901 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1c22129118cfbe342941c539f8edf59dd0bb382bac80faf3b07cd881f0673417', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.293907 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4b13204f9a53bdd8783b9e5464e50f511004acb63c982c0a602ac29ed0167da3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-05 01:22:33.293930 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b2da3c2c780aa9da2d408193f46513b35a8604d990b35b336120870bb5b3616a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-05 01:22:33.293942 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d59e9ecf365094b49886f137e72a31b28b54f9e1443786e91757492d77e59b7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})
2026-01-05 01:22:33.293951 | orchestrator | ok: [testbed-node-3] => (item={'id': 'bad8778168619f6d21d4cb98e4de1571a41b4a6d2acb912b5b13cfd03ed61727', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.293957 | orchestrator | ok: [testbed-node-3] => (item={'id': '60fe152334547f3606317782f0c7762571d2a038a5e4a06917ddeabb4cd6c077', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.293963 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8f9e2d079b8c1815840c6ecfbffc894fbf3d478f3fa263385b9221d356554997', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})
2026-01-05 01:22:33.293970 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a36ff4225ad725e1cb5a07099eeecc9409201bdc1dc842eec10976bd2fa337d', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:33.293976 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3077d0dee4a22007aec03269679f685b35f0be60d9f053f0e2f467f76d58228e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:33.293988 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0135218e66e9e5a8b90dcd1b114532da1f6d0c9722f8c79822a01d7d5ee95112', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:33.293994 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bed2b04dcd7e4772bd7997822578226b82a180e4297e12a085fa573f95853908', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:33.294000 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fb174af2abe0160790d33692c532c57ca0992f7696720d68b6a3dbc23869b98b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})
2026-01-05 01:22:33.294007 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3263ebb4605eadcfbd87b5b7cf0afc955f61dd48fe6fc7891b085dfe1275c90c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-01-05 01:22:33.294054 | orchestrator | skipping: [testbed-node-4] => (item={'id': '04d070d4b83fb48cb7ed98d82c0bf37dba84c2a8fd72ee00ecfd47fa5fb1cd01', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.294070 | orchestrator | skipping: [testbed-node-4] => (item={'id': '75a17696014fd90bffe4063f6c1d99cef3c4c320f9b828b890df65ad6472f5c6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.545119 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b1836a55a9a352473e412a52eefee8946df0a2aa9d13fdda184509f1623ab4a8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.545223 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4cec9d98fbf074ce00b44b86f9258e019c00c85d21114b4783cf2c82917af33d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-05 01:22:33.545239 | orchestrator | skipping: [testbed-node-4] => (item={'id': '081ad2ecfe36b97215e1940baefca35fe2c75eb66a3927d46fb48e0aeb5ca357', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.545260 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f13ca1239ac22f9a655856946d5998fa2383ffb075df0bbb7d3f6ba738fd1837', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.545267 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ee3bbb7bcb70e8bbb9bf4cfec8ce004efbd3fa6955620de8d39bc78e5721f6f5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-05 01:22:33.545274 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f00772cb6ff3c1df329bb01fba4e53c4d380e48eeb2092c47fd8f270e8aa4f0b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-05 01:22:33.545280 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5f817a455b2d9203544ad31fc1635fbb362cbc830f512e48fc8a148fb1c18dce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})
2026-01-05 01:22:33.545305 | orchestrator | ok: [testbed-node-4] => (item={'id': '4b79f743368845c13816860735778bb93138f21a947e0e3c8f67545c099c7d07', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.545311 | orchestrator | ok: [testbed-node-4] => (item={'id': '0d76ad2139251f80f78c38bfb8f993aa9dd443cbf6f93f7e8c653fc93b96a5dd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.545317 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcc8e5b49e657dc3a59277f05c5496d81d2916150e83ffe7fb32920ea4ac793f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})
2026-01-05 01:22:33.545322 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0f184453c9709516d561564e46d723851de7f9498c6d8c82f2e793ace4c9c6c9', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:33.545328 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ad7f8caa987b107be3ea725d09915d0884d25af1425f2160817705cf7e3bb26', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:33.545334 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9715fa43abac83b4a9372bccd2dbb339e3c5cdad2063ecabe028291f7eb90a1e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:33.545340 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8f6f8419ae4e017051b4457b6cb3010caf96026ed1d57f1edf428764cb951fc2', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:33.545345 | orchestrator | skipping: [testbed-node-4] => (item={'id': '949597dfa2325e03ca7a841e6efda6665c15461c1ea784fdddc726e77215699d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})
2026-01-05 01:22:33.545363 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25188729cbae43ce3db79b5b96809414158544d5af8ace1a532f6f57a3c09013', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-01-05 01:22:33.545369 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f4ec27bfe1ed3bd161bec703ae4ea630ee8b387cd9dd0917b458cc3aaf0b75d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.545374 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f50b532aa777630a77e76e82fc64fdc3905e49205cb9b5ddf0796b3da0a751a', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.545383 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d470c18397029996eeb2b2917843389d5af04c021263d8c052bd41f013877fa', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2026-01-05 01:22:33.545389 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f3a03708aad635d6b32400a526affb2b170299167f766f5d72378ac29f31f8b5', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-05 01:22:33.545430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3dfd6dfa3df228509306a93ba4f4f0a964880a5785331b3420385b21b153ba90', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.545450 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b5304be6139039e48c2186788c0c1a58324ecb02c8486ce18c856ea24979adbc', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-05 01:22:33.545458 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4816576113c34aa17bccde42fb999fa463969e1cec47d0fe05075ef414fa8646', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-05 01:22:33.545467 | orchestrator | skipping: [testbed-node-5] => (item={'id': '443ab0e3084c4fa671050bf1f3c02bb42364e4a20d4d57ede4ca43553ba3d110', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-05 01:22:33.545476 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7c1cbed3d09223616fd1e047c5e5fa53847b6465e3449fbe92a40502bc6c5f4a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})
2026-01-05 01:22:33.545485 | orchestrator | ok: [testbed-node-5] => (item={'id': '61c2518b02157e7004a1187ba4f168c2b2ee7e507bb81dbc601802ed64fea67f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.545493 | orchestrator | ok: [testbed-node-5] => (item={'id': 'e6de6feb4874b61476779c31e3a2fa9f2db80741167b857301b450c68f277fec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'})
2026-01-05 01:22:33.545501 | orchestrator | skipping: [testbed-node-5] => (item={'id': '639c83716f24f729e8546c0077368d134e4abbf8d34b0e6c6966e2dbdc4175a8', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})
2026-01-05 01:22:33.545507 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41a4383652f8eaa22ed5119b3ee51b5673e20c1d8da7c43abaf93b063ca59c97', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:33.545519 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b64a279c3ebe8ca8f7e84dd7da3c6d7088e02479904f0ba3da95995c367aad8a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})
2026-01-05 01:22:46.462448 | orchestrator | skipping: [testbed-node-5] => (item={'id': '142e8b49dd066cfa0890e1cc7d3f9c8bad0f548d08847ccca6eaff6cc9edd67c', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:46.462564 | orchestrator | skipping: [testbed-node-5] => (item={'id': '90148d15a3009053ada79dcf1fcc73913adb73d69b688b01befd445a494e23e1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})
2026-01-05 01:22:46.462582 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f05df0646c8f51367cc359931ccd12966dc513c009b9ed941d044a4186cb55c1', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})
2026-01-05 01:22:46.462669 | orchestrator |
2026-01-05 01:22:46.462682 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-01-05 01:22:46.462695 | orchestrator | Monday 05 January 2026 01:22:33 +0000 (0:00:00.565) 0:00:05.359 ********
2026-01-05 01:22:46.462722 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:46.462731 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:22:46.462737 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:22:46.462744 | orchestrator |
2026-01-05 01:22:46.462750 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-01-05 01:22:46.462757 | orchestrator | Monday 05 January 2026 01:22:33 +0000 (0:00:00.316) 0:00:05.676 ********
2026-01-05 01:22:46.462763 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:22:46.462771 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:22:46.462777 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:22:46.462783 | orchestrator |
2026-01-05 01:22:46.462790 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-01-05 01:22:46.462796 | orchestrator | Monday 05 January 2026 01:22:34 +0000 (0:00:00.504) 0:00:06.181 ********
2026-01-05 01:22:46.462802 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:46.462809 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:22:46.462815 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:22:46.462821 | orchestrator |
2026-01-05 01:22:46.462828 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 01:22:46.462834 | orchestrator | Monday 05 January 2026 01:22:34 +0000 (0:00:00.325) 0:00:06.507 ********
2026-01-05 01:22:46.462840 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:22:46.462846 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:22:46.462853 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:22:46.462859 | orchestrator |
2026-01-05 01:22:46.462865 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-01-05 01:22:46.462872 | orchestrator | Monday 05 January 2026 01:22:34 +0000 (0:00:00.312) 0:00:06.819 ********
2026-01-05 01:22:46.462878 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-01-05 01:22:46.462888 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-01-05 01:22:46.462899 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:22:46.462908 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-01-05 01:22:46.462949 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-01-05 01:22:46.462962 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:22:46.462972 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-01-05 01:22:46.462983 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-01-05 01:22:46.462993 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:22:46.463005 | orchestrator |
2026-01-05 01:22:46.463015 | orchestrator | TASK [Get count of ceph-osd containers that are not running]
******************* 2026-01-05 01:22:46.463026 | orchestrator | Monday 05 January 2026 01:22:35 +0000 (0:00:00.351) 0:00:07.171 ******** 2026-01-05 01:22:46.463034 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463042 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463049 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463056 | orchestrator | 2026-01-05 01:22:46.463064 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-05 01:22:46.463071 | orchestrator | Monday 05 January 2026 01:22:35 +0000 (0:00:00.543) 0:00:07.714 ******** 2026-01-05 01:22:46.463078 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463085 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:46.463093 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:46.463099 | orchestrator | 2026-01-05 01:22:46.463106 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-05 01:22:46.463113 | orchestrator | Monday 05 January 2026 01:22:36 +0000 (0:00:00.318) 0:00:08.033 ******** 2026-01-05 01:22:46.463121 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463128 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:46.463141 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:46.463149 | orchestrator | 2026-01-05 01:22:46.463156 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-01-05 01:22:46.463163 | orchestrator | Monday 05 January 2026 01:22:36 +0000 (0:00:00.297) 0:00:08.330 ******** 2026-01-05 01:22:46.463170 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463177 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463185 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463192 | orchestrator | 2026-01-05 01:22:46.463200 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 
01:22:46.463207 | orchestrator | Monday 05 January 2026 01:22:36 +0000 (0:00:00.306) 0:00:08.637 ******** 2026-01-05 01:22:46.463215 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463222 | orchestrator | 2026-01-05 01:22:46.463246 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 01:22:46.463253 | orchestrator | Monday 05 January 2026 01:22:37 +0000 (0:00:00.747) 0:00:09.384 ******** 2026-01-05 01:22:46.463259 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463265 | orchestrator | 2026-01-05 01:22:46.463271 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 01:22:46.463277 | orchestrator | Monday 05 January 2026 01:22:37 +0000 (0:00:00.262) 0:00:09.647 ******** 2026-01-05 01:22:46.463284 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463290 | orchestrator | 2026-01-05 01:22:46.463296 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:46.463302 | orchestrator | Monday 05 January 2026 01:22:38 +0000 (0:00:00.279) 0:00:09.926 ******** 2026-01-05 01:22:46.463309 | orchestrator | 2026-01-05 01:22:46.463315 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:46.463321 | orchestrator | Monday 05 January 2026 01:22:38 +0000 (0:00:00.071) 0:00:09.998 ******** 2026-01-05 01:22:46.463327 | orchestrator | 2026-01-05 01:22:46.463334 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:46.463340 | orchestrator | Monday 05 January 2026 01:22:38 +0000 (0:00:00.090) 0:00:10.088 ******** 2026-01-05 01:22:46.463346 | orchestrator | 2026-01-05 01:22:46.463356 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 01:22:46.463363 | orchestrator | Monday 05 January 2026 01:22:38 +0000 
(0:00:00.071) 0:00:10.160 ******** 2026-01-05 01:22:46.463369 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463375 | orchestrator | 2026-01-05 01:22:46.463382 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-01-05 01:22:46.463388 | orchestrator | Monday 05 January 2026 01:22:38 +0000 (0:00:00.250) 0:00:10.410 ******** 2026-01-05 01:22:46.463394 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463400 | orchestrator | 2026-01-05 01:22:46.463407 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 01:22:46.463413 | orchestrator | Monday 05 January 2026 01:22:38 +0000 (0:00:00.279) 0:00:10.689 ******** 2026-01-05 01:22:46.463419 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463425 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463432 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463438 | orchestrator | 2026-01-05 01:22:46.463444 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-01-05 01:22:46.463450 | orchestrator | Monday 05 January 2026 01:22:39 +0000 (0:00:00.305) 0:00:10.995 ******** 2026-01-05 01:22:46.463457 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463463 | orchestrator | 2026-01-05 01:22:46.463469 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-01-05 01:22:46.463476 | orchestrator | Monday 05 January 2026 01:22:39 +0000 (0:00:00.266) 0:00:11.261 ******** 2026-01-05 01:22:46.463482 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:22:46.463488 | orchestrator | 2026-01-05 01:22:46.463494 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-01-05 01:22:46.463501 | orchestrator | Monday 05 January 2026 01:22:41 +0000 (0:00:02.153) 0:00:13.415 ******** 2026-01-05 01:22:46.463511 | 
orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463518 | orchestrator | 2026-01-05 01:22:46.463524 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-01-05 01:22:46.463530 | orchestrator | Monday 05 January 2026 01:22:41 +0000 (0:00:00.137) 0:00:13.552 ******** 2026-01-05 01:22:46.463538 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463548 | orchestrator | 2026-01-05 01:22:46.463563 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-01-05 01:22:46.463575 | orchestrator | Monday 05 January 2026 01:22:42 +0000 (0:00:00.359) 0:00:13.912 ******** 2026-01-05 01:22:46.463585 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463646 | orchestrator | 2026-01-05 01:22:46.463658 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-01-05 01:22:46.463667 | orchestrator | Monday 05 January 2026 01:22:42 +0000 (0:00:00.192) 0:00:14.104 ******** 2026-01-05 01:22:46.463675 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463683 | orchestrator | 2026-01-05 01:22:46.463692 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 01:22:46.463702 | orchestrator | Monday 05 January 2026 01:22:42 +0000 (0:00:00.138) 0:00:14.243 ******** 2026-01-05 01:22:46.463713 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463722 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463733 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463744 | orchestrator | 2026-01-05 01:22:46.463754 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-01-05 01:22:46.463765 | orchestrator | Monday 05 January 2026 01:22:42 +0000 (0:00:00.307) 0:00:14.550 ******** 2026-01-05 01:22:46.463776 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:22:46.463782 | orchestrator | changed: 
[testbed-node-4] 2026-01-05 01:22:46.463788 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:22:46.463794 | orchestrator | 2026-01-05 01:22:46.463800 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-01-05 01:22:46.463807 | orchestrator | Monday 05 January 2026 01:22:45 +0000 (0:00:02.300) 0:00:16.851 ******** 2026-01-05 01:22:46.463813 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463819 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463825 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463831 | orchestrator | 2026-01-05 01:22:46.463838 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-01-05 01:22:46.463844 | orchestrator | Monday 05 January 2026 01:22:45 +0000 (0:00:00.580) 0:00:17.431 ******** 2026-01-05 01:22:46.463850 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:46.463856 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:46.463862 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:46.463868 | orchestrator | 2026-01-05 01:22:46.463874 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-01-05 01:22:46.463880 | orchestrator | Monday 05 January 2026 01:22:46 +0000 (0:00:00.512) 0:00:17.943 ******** 2026-01-05 01:22:46.463887 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:46.463893 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:46.463899 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:46.463905 | orchestrator | 2026-01-05 01:22:46.463920 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-01-05 01:22:55.977735 | orchestrator | Monday 05 January 2026 01:22:46 +0000 (0:00:00.337) 0:00:18.281 ******** 2026-01-05 01:22:55.977841 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:55.977853 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:55.977860 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:55.977868 | orchestrator | 2026-01-05 01:22:55.977877 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-01-05 01:22:55.977884 | orchestrator | Monday 05 January 2026 01:22:46 +0000 (0:00:00.510) 0:00:18.791 ******** 2026-01-05 01:22:55.977891 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:55.977900 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:55.977929 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:55.977937 | orchestrator | 2026-01-05 01:22:55.977944 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-01-05 01:22:55.977951 | orchestrator | Monday 05 January 2026 01:22:47 +0000 (0:00:00.357) 0:00:19.148 ******** 2026-01-05 01:22:55.977958 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:55.977965 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:55.977971 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:55.977978 | orchestrator | 2026-01-05 01:22:55.977985 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 01:22:55.978005 | orchestrator | Monday 05 January 2026 01:22:47 +0000 (0:00:00.316) 0:00:19.465 ******** 2026-01-05 01:22:55.978064 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:55.978074 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:55.978080 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:55.978087 | orchestrator | 2026-01-05 01:22:55.978094 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-01-05 01:22:55.978100 | orchestrator | Monday 05 January 2026 01:22:48 +0000 (0:00:00.500) 0:00:19.966 ******** 2026-01-05 01:22:55.978106 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:55.978113 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:55.978120 | orchestrator | ok: [testbed-node-5] 
2026-01-05 01:22:55.978127 | orchestrator | 2026-01-05 01:22:55.978133 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-01-05 01:22:55.978140 | orchestrator | Monday 05 January 2026 01:22:49 +0000 (0:00:00.971) 0:00:20.937 ******** 2026-01-05 01:22:55.978146 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:55.978153 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:55.978159 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:55.978166 | orchestrator | 2026-01-05 01:22:55.978172 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-01-05 01:22:55.978179 | orchestrator | Monday 05 January 2026 01:22:49 +0000 (0:00:00.303) 0:00:21.240 ******** 2026-01-05 01:22:55.978187 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:55.978195 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:55.978201 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:55.978208 | orchestrator | 2026-01-05 01:22:55.978215 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-01-05 01:22:55.978222 | orchestrator | Monday 05 January 2026 01:22:49 +0000 (0:00:00.327) 0:00:21.567 ******** 2026-01-05 01:22:55.978229 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:55.978236 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:55.978243 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:55.978251 | orchestrator | 2026-01-05 01:22:55.978258 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-05 01:22:55.978266 | orchestrator | Monday 05 January 2026 01:22:50 +0000 (0:00:00.316) 0:00:21.883 ******** 2026-01-05 01:22:55.978275 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:22:55.978282 | orchestrator | 2026-01-05 01:22:55.978290 | orchestrator | TASK [Set validation result to failed if a test failed] 
************************ 2026-01-05 01:22:55.978297 | orchestrator | Monday 05 January 2026 01:22:50 +0000 (0:00:00.548) 0:00:22.431 ******** 2026-01-05 01:22:55.978304 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:55.978312 | orchestrator | 2026-01-05 01:22:55.978319 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 01:22:55.978324 | orchestrator | Monday 05 January 2026 01:22:51 +0000 (0:00:00.725) 0:00:23.157 ******** 2026-01-05 01:22:55.978330 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:22:55.978336 | orchestrator | 2026-01-05 01:22:55.978343 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 01:22:55.978351 | orchestrator | Monday 05 January 2026 01:22:52 +0000 (0:00:01.608) 0:00:24.766 ******** 2026-01-05 01:22:55.978358 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:22:55.978366 | orchestrator | 2026-01-05 01:22:55.978374 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 01:22:55.978390 | orchestrator | Monday 05 January 2026 01:22:53 +0000 (0:00:00.282) 0:00:25.048 ******** 2026-01-05 01:22:55.978398 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:22:55.978405 | orchestrator | 2026-01-05 01:22:55.978413 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:55.978421 | orchestrator | Monday 05 January 2026 01:22:53 +0000 (0:00:00.282) 0:00:25.330 ******** 2026-01-05 01:22:55.978428 | orchestrator | 2026-01-05 01:22:55.978436 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:55.978443 | orchestrator | Monday 05 January 2026 01:22:53 +0000 (0:00:00.082) 0:00:25.413 ******** 2026-01-05 01:22:55.978451 | orchestrator | 2026-01-05 
01:22:55.978459 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 01:22:55.978467 | orchestrator | Monday 05 January 2026 01:22:53 +0000 (0:00:00.073) 0:00:25.486 ******** 2026-01-05 01:22:55.978475 | orchestrator | 2026-01-05 01:22:55.978483 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-05 01:22:55.978490 | orchestrator | Monday 05 January 2026 01:22:53 +0000 (0:00:00.072) 0:00:25.559 ******** 2026-01-05 01:22:55.978498 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:22:55.978505 | orchestrator | 2026-01-05 01:22:55.978513 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 01:22:55.978520 | orchestrator | Monday 05 January 2026 01:22:55 +0000 (0:00:01.381) 0:00:26.941 ******** 2026-01-05 01:22:55.978550 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-01-05 01:22:55.978557 | orchestrator |  "msg": [ 2026-01-05 01:22:55.978566 | orchestrator |  "Validator run completed.", 2026-01-05 01:22:55.978574 | orchestrator |  "You can find the report file here:", 2026-01-05 01:22:55.978581 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-05T01:22:29+00:00-report.json", 2026-01-05 01:22:55.978589 | orchestrator |  "on the following host:", 2026-01-05 01:22:55.978597 | orchestrator |  "testbed-manager" 2026-01-05 01:22:55.978604 | orchestrator |  ] 2026-01-05 01:22:55.978610 | orchestrator | } 2026-01-05 01:22:55.978617 | orchestrator | 2026-01-05 01:22:55.978623 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:22:55.978630 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 01:22:55.978660 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  
rescued=0 ignored=0 2026-01-05 01:22:55.978674 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-05 01:22:55.978680 | orchestrator | 2026-01-05 01:22:55.978687 | orchestrator | 2026-01-05 01:22:55.978694 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:22:55.978701 | orchestrator | Monday 05 January 2026 01:22:55 +0000 (0:00:00.433) 0:00:27.375 ******** 2026-01-05 01:22:55.978708 | orchestrator | =============================================================================== 2026-01-05 01:22:55.978715 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.30s 2026-01-05 01:22:55.978723 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.15s 2026-01-05 01:22:55.978730 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s 2026-01-05 01:22:55.978737 | orchestrator | Write report file ------------------------------------------------------- 1.38s 2026-01-05 01:22:55.978744 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.97s 2026-01-05 01:22:55.978751 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-01-05 01:22:55.978758 | orchestrator | Aggregate test results step one ----------------------------------------- 0.75s 2026-01-05 01:22:55.978772 | orchestrator | Create report output directory ------------------------------------------ 0.74s 2026-01-05 01:22:55.978779 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.73s 2026-01-05 01:22:55.978786 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.59s 2026-01-05 01:22:55.978793 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.58s 2026-01-05 01:22:55.978800 | 
orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s 2026-01-05 01:22:55.978807 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.55s 2026-01-05 01:22:55.978814 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.54s 2026-01-05 01:22:55.978821 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.52s 2026-01-05 01:22:55.978828 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-01-05 01:22:55.978834 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.51s 2026-01-05 01:22:55.978841 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2026-01-05 01:22:55.978848 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-01-05 01:22:55.978855 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-01-05 01:22:56.342735 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-05 01:22:56.348562 | orchestrator | + set -e 2026-01-05 01:22:56.348629 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 01:22:56.348634 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 01:22:56.348639 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 01:22:56.348661 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 01:22:56.348665 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 01:22:56.348669 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 01:22:56.349611 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 01:22:56.349655 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 01:22:56.349661 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 01:22:56.349667 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 
01:22:56.349672 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 01:22:56.349676 | orchestrator | ++ export ARA=false 2026-01-05 01:22:56.349680 | orchestrator | ++ ARA=false 2026-01-05 01:22:56.349684 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 01:22:56.349688 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 01:22:56.349692 | orchestrator | ++ export TEMPEST=true 2026-01-05 01:22:56.349696 | orchestrator | ++ TEMPEST=true 2026-01-05 01:22:56.349700 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 01:22:56.349704 | orchestrator | ++ IS_ZUUL=true 2026-01-05 01:22:56.349708 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 01:22:56.349712 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35 2026-01-05 01:22:56.349716 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 01:22:56.349720 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 01:22:56.349724 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 01:22:56.349728 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 01:22:56.349732 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 01:22:56.349735 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 01:22:56.349739 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 01:22:56.349743 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 01:22:56.349747 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-05 01:22:56.349750 | orchestrator | + source /etc/os-release 2026-01-05 01:22:56.349754 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-05 01:22:56.349758 | orchestrator | ++ NAME=Ubuntu 2026-01-05 01:22:56.349762 | orchestrator | ++ VERSION_ID=24.04 2026-01-05 01:22:56.349765 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-05 01:22:56.349769 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-05 01:22:56.349773 | orchestrator | ++ ID=ubuntu 2026-01-05 01:22:56.349777 | orchestrator | ++ ID_LIKE=debian 2026-01-05 01:22:56.349781 | orchestrator | ++ 
HOME_URL=https://www.ubuntu.com/ 2026-01-05 01:22:56.349785 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-05 01:22:56.349788 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-05 01:22:56.349793 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-05 01:22:56.349798 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-05 01:22:56.349819 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-05 01:22:56.349823 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-05 01:22:56.349828 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-01-05 01:22:56.349833 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-05 01:22:56.372257 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-05 01:23:21.690902 | orchestrator | 2026-01-05 01:23:21.691014 | orchestrator | # Status of Elasticsearch 2026-01-05 01:23:21.691031 | orchestrator | 2026-01-05 01:23:21.691044 | orchestrator | + pushd /opt/configuration/contrib 2026-01-05 01:23:21.691056 | orchestrator | + echo 2026-01-05 01:23:21.691068 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-05 01:23:21.691079 | orchestrator | + echo 2026-01-05 01:23:21.691090 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-05 01:23:21.852606 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-05 01:23:21.852847 | orchestrator | 2026-01-05 01:23:21.852862 | orchestrator | # Status of MariaDB 2026-01-05 01:23:21.852869 | orchestrator | 2026-01-05 01:23:21.852876 | orchestrator | + echo 2026-01-05 01:23:21.852883 | orchestrator | + echo '# Status of MariaDB' 2026-01-05 01:23:21.852890 | orchestrator | + echo 2026-01-05 01:23:21.853519 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-05 01:23:21.911878 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 01:23:21.911955 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 01:23:21.911962 | orchestrator | + MARIADB_USER=root_shard_0 2026-01-05 01:23:21.911967 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-01-05 01:23:21.982698 | orchestrator | Reading package lists... 2026-01-05 01:23:22.332813 | orchestrator | Building dependency tree... 2026-01-05 01:23:22.333050 | orchestrator | Reading state information... 2026-01-05 01:23:22.726266 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-01-05 01:23:22.726360 | orchestrator | bc set to manually installed. 2026-01-05 01:23:22.726369 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-01-05 01:23:23.429319 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-01-05 01:23:23.430464 | orchestrator |
2026-01-05 01:23:23.430507 | orchestrator | # Status of Prometheus
2026-01-05 01:23:23.430516 | orchestrator |
2026-01-05 01:23:23.430523 | orchestrator | + echo
2026-01-05 01:23:23.430531 | orchestrator | + echo '# Status of Prometheus'
2026-01-05 01:23:23.430538 | orchestrator | + echo
2026-01-05 01:23:23.430546 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-01-05 01:23:23.510440 | orchestrator | Unauthorized
2026-01-05 01:23:23.514212 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-01-05 01:23:23.581176 | orchestrator | Unauthorized
2026-01-05 01:23:23.583264 | orchestrator |
2026-01-05 01:23:23.583299 | orchestrator | # Status of RabbitMQ
2026-01-05 01:23:23.583306 | orchestrator | + echo
2026-01-05 01:23:23.583312 | orchestrator | + echo '# Status of RabbitMQ'
2026-01-05 01:23:23.583317 | orchestrator | + echo
2026-01-05 01:23:23.583322 | orchestrator |
2026-01-05 01:23:23.584449 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-05 01:23:23.633999 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 01:23:23.634156 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-05 01:23:23.634174 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-01-05 01:23:24.133542 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-01-05 01:23:24.142979 | orchestrator |
2026-01-05 01:23:24.143068 | orchestrator | # Status of Redis
2026-01-05 01:23:24.143076 | orchestrator |
2026-01-05 01:23:24.143083 | orchestrator | + echo
2026-01-05 01:23:24.143091 | orchestrator | + echo '# Status of Redis'
2026-01-05 01:23:24.143098 | orchestrator | + echo
2026-01-05 01:23:24.143107 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-01-05 01:23:24.147614 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001723s;;;0.000000;10.000000
2026-01-05 01:23:24.147738 | orchestrator |
2026-01-05 01:23:24.147750 | orchestrator | # Create backup of MariaDB database
2026-01-05 01:23:24.147759 | orchestrator | + popd
2026-01-05 01:23:24.147766 | orchestrator | + echo
2026-01-05 01:23:24.147773 | orchestrator | + echo '# Create backup of MariaDB database'
2026-01-05 01:23:24.147780 | orchestrator | + echo
2026-01-05 01:23:24.147812 | orchestrator |
2026-01-05 01:23:24.147819 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-01-05 01:23:26.192310 | orchestrator | 2026-01-05 01:23:26 | INFO  | Task db57fb96-f632-4a2b-b42e-8f3c64247f28 (mariadb_backup) was prepared for execution.
2026-01-05 01:23:26.192391 | orchestrator | 2026-01-05 01:23:26 | INFO  | It takes a moment until task db57fb96-f632-4a2b-b42e-8f3c64247f28 (mariadb_backup) has been started and output is visible here.
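The RabbitMQ check above first compares the manager version against 10.0.0-0 (`semver 9.5.0 10.0.0-0` prints -1, so the `[[ -1 -ge 0 ]]` branch is skipped). A minimal sketch of such a three-way version comparison using GNU coreutils `sort -V`; the helper name `semver_cmp` is hypothetical and the testbed's own `semver` binary may differ (for instance around pre-release suffixes):

```shell
#!/bin/sh
# Hypothetical sketch, not the `semver` tool invoked in the log:
# print -1, 0, or 1 depending on whether $1 sorts before, equal to,
# or after $2 under GNU version ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

semver_cmp 9.5.0 10.0.0-0   # prints -1, matching the trace above
```
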
2026-01-05 01:23:53.970910 | orchestrator | 2026-01-05 01:23:53.971126 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:23:53.971149 | orchestrator | 2026-01-05 01:23:53.971161 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:23:53.971174 | orchestrator | Monday 05 January 2026 01:23:30 +0000 (0:00:00.180) 0:00:00.180 ******** 2026-01-05 01:23:53.971185 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:23:53.971198 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:23:53.971208 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:23:53.971220 | orchestrator | 2026-01-05 01:23:53.971231 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:23:53.971242 | orchestrator | Monday 05 January 2026 01:23:30 +0000 (0:00:00.339) 0:00:00.520 ******** 2026-01-05 01:23:53.971255 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-05 01:23:53.971267 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-05 01:23:53.971277 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-05 01:23:53.971288 | orchestrator | 2026-01-05 01:23:53.971299 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-05 01:23:53.971310 | orchestrator | 2026-01-05 01:23:53.971321 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-05 01:23:53.971332 | orchestrator | Monday 05 January 2026 01:23:31 +0000 (0:00:00.599) 0:00:01.119 ******** 2026-01-05 01:23:53.971343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 01:23:53.971354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-05 01:23:53.971365 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-05 01:23:53.971376 | orchestrator | 
2026-01-05 01:23:53.971387 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 01:23:53.971401 | orchestrator | Monday 05 January 2026 01:23:31 +0000 (0:00:00.409) 0:00:01.529 ******** 2026-01-05 01:23:53.971414 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:23:53.971428 | orchestrator | 2026-01-05 01:23:53.971441 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-05 01:23:53.971454 | orchestrator | Monday 05 January 2026 01:23:32 +0000 (0:00:00.601) 0:00:02.131 ******** 2026-01-05 01:23:53.971467 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:23:53.971479 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:23:53.971492 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:23:53.971505 | orchestrator | 2026-01-05 01:23:53.971517 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-05 01:23:53.971530 | orchestrator | Monday 05 January 2026 01:23:35 +0000 (0:00:03.339) 0:00:05.470 ******** 2026-01-05 01:23:53.971542 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-05 01:23:53.971556 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-05 01:23:53.971570 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 01:23:53.971600 | orchestrator | mariadb_bootstrap_restart 2026-01-05 01:23:53.971636 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:23:53.971706 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:23:53.971721 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:23:53.971734 | orchestrator | 2026-01-05 01:23:53.971747 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 01:23:53.971758 | orchestrator | 
skipping: no hosts matched 2026-01-05 01:23:53.971769 | orchestrator | 2026-01-05 01:23:53.971780 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 01:23:53.971791 | orchestrator | skipping: no hosts matched 2026-01-05 01:23:53.971802 | orchestrator | 2026-01-05 01:23:53.971812 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 01:23:53.971824 | orchestrator | skipping: no hosts matched 2026-01-05 01:23:53.971834 | orchestrator | 2026-01-05 01:23:53.971845 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 01:23:53.971856 | orchestrator | 2026-01-05 01:23:53.971867 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 01:23:53.971878 | orchestrator | Monday 05 January 2026 01:23:52 +0000 (0:00:17.137) 0:00:22.608 ******** 2026-01-05 01:23:53.971889 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:23:53.971899 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:23:53.971910 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:23:53.971921 | orchestrator | 2026-01-05 01:23:53.971958 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 01:23:53.971970 | orchestrator | Monday 05 January 2026 01:23:53 +0000 (0:00:00.359) 0:00:22.968 ******** 2026-01-05 01:23:53.971981 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:23:53.971992 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:23:53.972003 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:23:53.972020 | orchestrator | 2026-01-05 01:23:53.972038 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:23:53.972055 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 
01:23:53.972073 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 01:23:53.972090 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 01:23:53.972109 | orchestrator |
2026-01-05 01:23:53.972126 | orchestrator |
2026-01-05 01:23:53.972145 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:23:53.972164 | orchestrator | Monday 05 January 2026 01:23:53 +0000 (0:00:00.417) 0:00:23.385 ********
2026-01-05 01:23:53.972181 | orchestrator | ===============================================================================
2026-01-05 01:23:53.972194 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.14s
2026-01-05 01:23:53.972240 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.34s
2026-01-05 01:23:53.972260 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s
2026-01-05 01:23:53.972275 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-01-05 01:23:53.972290 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s
2026-01-05 01:23:53.972306 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s
2026-01-05 01:23:53.972324 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.36s
2026-01-05 01:23:53.972343 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-01-05 01:23:54.329270 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-01-05 01:23:54.336713 | orchestrator | + set -e
2026-01-05 01:23:54.336826 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 01:23:54.336846 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 01:23:54.336858 | orchestrator | ++ INTERACTIVE=false
2026-01-05 01:23:54.336898 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 01:23:54.336909 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 01:23:54.336921 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-05 01:23:54.337803 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-05 01:23:54.343722 | orchestrator |
2026-01-05 01:23:54.343789 | orchestrator | # OpenStack endpoints
2026-01-05 01:23:54.343803 | orchestrator |
2026-01-05 01:23:54.343815 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 01:23:54.343827 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 01:23:54.343838 | orchestrator | + export OS_CLOUD=admin
2026-01-05 01:23:54.343849 | orchestrator | + OS_CLOUD=admin
2026-01-05 01:23:54.343860 | orchestrator | + echo
2026-01-05 01:23:54.343871 | orchestrator | + echo '# OpenStack endpoints'
2026-01-05 01:23:54.343882 | orchestrator | + echo
2026-01-05 01:23:54.343893 | orchestrator | + openstack endpoint list
2026-01-05 01:23:57.735189 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-01-05 01:23:57.735295 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-01-05 01:23:57.735307 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-01-05 01:23:57.735313 | orchestrator | | 0f7452665eef473781ce6a85b9184b68 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-01-05 01:23:57.735318 | orchestrator | | 150b99edeb7b46cca098c2d1adc0f15b | RegionOne | keystone | identity | True 
| internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-05 01:23:57.735335 | orchestrator | | 2155ff4fbac448149dd953ba7d56506c | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-05 01:23:57.735340 | orchestrator | | 3e3eb52b041340b5888d8f92bae8aac4 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-05 01:23:57.735345 | orchestrator | | 4624ce5a5716436cb5f3c78d61602dbc | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-05 01:23:57.735350 | orchestrator | | 538178d7def841079290b11e05b45f44 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-05 01:23:57.735354 | orchestrator | | 5cb34f2025264299b1afca92936ca153 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-05 01:23:57.735359 | orchestrator | | 6e128b29ca36410594d32a8a75b1104d | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-05 01:23:57.735364 | orchestrator | | 6ed3a7f80a4747a1bc121d24537bae33 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-05 01:23:57.735369 | orchestrator | | 725846d5ea794713871d63b73ab3c57a | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-05 01:23:57.735374 | orchestrator | | 73ac51aa3af34b5685b8ba6d77712644 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-05 01:23:57.735379 | orchestrator | | 7dd067647c894ea1a569af571a9230d8 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-05 01:23:57.735384 | orchestrator | | 81842191ec654cd1a049f378ffcc23a7 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-05 
01:23:57.735409 | orchestrator | | 8ddae5d9de7c432daf10f85ef2b8f20d | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-05 01:23:57.735415 | orchestrator | | 96feee72956c44ffbc2251ba01664f66 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-05 01:23:57.735420 | orchestrator | | 98b7797043dc4d7a9d40edc30090be54 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-05 01:23:57.735425 | orchestrator | | ab35fda28cbb4b60856b07271b30eceb | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-05 01:23:57.735430 | orchestrator | | e5bc65d7d1434ebf8f16989a8eb87a3f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-05 01:23:57.735435 | orchestrator | | e88ac7efc3a04d8895579076e7dd6de9 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-05 01:23:57.735440 | orchestrator | | ee8fa4a7e3d943f9a62d6a2825c88522 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-05 01:23:57.735461 | orchestrator | | f09cbe0b43ad4d3fb4e3d0c29be53025 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-05 01:23:57.735466 | orchestrator | | f80235046cbd432d9a38ed346ff09719 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-05 01:23:57.735472 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-05 01:23:57.989915 | orchestrator | 2026-01-05 01:23:57.990166 | orchestrator | # Cinder 2026-01-05 01:23:57.990184 | orchestrator | 2026-01-05 01:23:57.990197 | orchestrator | + echo 
2026-01-05 01:23:57.990210 | orchestrator | + echo '# Cinder' 2026-01-05 01:23:57.990221 | orchestrator | + echo 2026-01-05 01:23:57.990233 | orchestrator | + openstack volume service list 2026-01-05 01:24:00.719207 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 01:24:00.719336 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-05 01:24:00.719359 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 01:24:00.719377 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-05T01:23:51.000000 | 2026-01-05 01:24:00.719417 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-05T01:23:51.000000 | 2026-01-05 01:24:00.719434 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-05T01:23:51.000000 | 2026-01-05 01:24:00.719449 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-05T01:23:50.000000 | 2026-01-05 01:24:00.719465 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-05T01:23:58.000000 | 2026-01-05 01:24:00.719481 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-05T01:24:00.000000 | 2026-01-05 01:24:00.719498 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-05T01:23:52.000000 | 2026-01-05 01:24:00.719515 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-05T01:23:56.000000 | 2026-01-05 01:24:00.719532 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-05T01:23:57.000000 | 2026-01-05 01:24:00.719549 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 01:24:00.998372 | 
orchestrator | 2026-01-05 01:24:00.998472 | orchestrator | # Neutron 2026-01-05 01:24:00.998480 | orchestrator | 2026-01-05 01:24:00.998486 | orchestrator | + echo 2026-01-05 01:24:00.998492 | orchestrator | + echo '# Neutron' 2026-01-05 01:24:00.998498 | orchestrator | + echo 2026-01-05 01:24:00.998504 | orchestrator | + openstack network agent list 2026-01-05 01:24:03.923779 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 01:24:03.923875 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-05 01:24:03.923885 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 01:24:03.923894 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923902 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923910 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923918 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923926 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923934 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-05 01:24:03.923942 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-05 01:24:03.923950 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | 
UP | neutron-ovn-metadata-agent | 2026-01-05 01:24:03.923958 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-05 01:24:03.923966 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 01:24:04.232094 | orchestrator | + openstack network service provider list 2026-01-05 01:24:06.883652 | orchestrator | +---------------+------+---------+ 2026-01-05 01:24:06.883770 | orchestrator | | Service Type | Name | Default | 2026-01-05 01:24:06.883870 | orchestrator | +---------------+------+---------+ 2026-01-05 01:24:06.883887 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-01-05 01:24:06.883898 | orchestrator | +---------------+------+---------+ 2026-01-05 01:24:07.194365 | orchestrator | 2026-01-05 01:24:07.194446 | orchestrator | # Nova 2026-01-05 01:24:07.194453 | orchestrator | 2026-01-05 01:24:07.194458 | orchestrator | + echo 2026-01-05 01:24:07.194464 | orchestrator | + echo '# Nova' 2026-01-05 01:24:07.194469 | orchestrator | + echo 2026-01-05 01:24:07.194474 | orchestrator | + openstack compute service list 2026-01-05 01:24:09.913785 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-05 01:24:09.913876 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-01-05 01:24:09.913886 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-05 01:24:09.913894 | orchestrator | | fdd1642c-e2e1-4c99-9417-adc0fcc765d3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-05T01:24:03.000000 | 2026-01-05 01:24:09.913902 | orchestrator | | 8f0fd683-ba18-48a5-8cb0-2c5cedb566f4 | nova-scheduler | testbed-node-1 
| internal | enabled | up | 2026-01-05T01:24:00.000000 |
2026-01-05 01:24:09.913930 | orchestrator | | a94707bb-3044-47d4-bb18-9e0b25a16eee | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-05T01:24:05.000000 |
2026-01-05 01:24:09.913938 | orchestrator | | b9c8d356-88be-4a6a-bca1-0e22ae09dc4b | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-05T01:24:01.000000 |
2026-01-05 01:24:09.913945 | orchestrator | | 91752908-d40e-403a-a7fb-fedc68daee88 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-05T01:24:02.000000 |
2026-01-05 01:24:09.913951 | orchestrator | | e03af1b3-83e1-4e61-9b64-8d99814aa82d | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-05T01:24:05.000000 |
2026-01-05 01:24:09.913958 | orchestrator | | 463d60d6-074a-4d1e-a148-772483aa0fc4 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-05T01:24:03.000000 |
2026-01-05 01:24:09.913965 | orchestrator | | ef5f68d5-b5ec-4470-9280-973293d28019 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-05T01:24:05.000000 |
2026-01-05 01:24:09.913972 | orchestrator | | 3ed3b700-e6b4-46b5-9881-f035cff02395 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-05T01:24:06.000000 |
2026-01-05 01:24:09.913979 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-05 01:24:10.196641 | orchestrator | + openstack hypervisor list
2026-01-05 01:24:13.514088 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-05 01:24:13.514169 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-01-05 01:24:13.514175 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-05 01:24:13.514179 | orchestrator | | ac9a7985-44ef-4e44-974a-35c0a28bee76 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-01-05 01:24:13.514184 | orchestrator | | e4ba4088-5353-42d8-b1e4-ad4ef1438446 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-01-05 01:24:13.514188 | orchestrator | | e839c90e-1b65-4897-9bc0-d7d61c015fe2 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-01-05 01:24:13.514192 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-05 01:24:13.777223 | orchestrator |
2026-01-05 01:24:13.777313 | orchestrator | # Run OpenStack test play
2026-01-05 01:24:13.777326 | orchestrator |
2026-01-05 01:24:13.777335 | orchestrator | + echo
2026-01-05 01:24:13.777342 | orchestrator | + echo '# Run OpenStack test play'
2026-01-05 01:24:13.777351 | orchestrator | + echo
2026-01-05 01:24:13.777359 | orchestrator | + osism apply --environment openstack test
2026-01-05 01:24:15.903934 | orchestrator | 2026-01-05 01:24:15 | INFO  | Trying to run play test in environment openstack
2026-01-05 01:24:26.058302 | orchestrator | 2026-01-05 01:24:26 | INFO  | Task ae8b1b22-c318-422b-9dc4-046529392130 (test) was prepared for execution.
2026-01-05 01:24:26.058405 | orchestrator | 2026-01-05 01:24:26 | INFO  | It takes a moment until task ae8b1b22-c318-422b-9dc4-046529392130 (test) has been started and output is visible here.
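The Cinder, Neutron, and Nova tables above are checked by eye for up/:-) states; a machine check is simpler against `openstack ... list -f value -c State` output. A minimal sketch under that assumption (the check script itself prints full tables, and `count_down` is a hypothetical helper, not part of the testbed scripts):

```shell
#!/bin/sh
# Hypothetical: count services whose State column is not "up", given
# `openstack compute service list -f value -c State` style output on stdin.
count_down() {
    grep -cv '^up$' || true   # grep -c exits 1 when every line is "up"
}

printf 'up\nup\ndown\n' | count_down   # prints 1
```
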
2026-01-05 01:31:31.325041 | orchestrator | 2026-01-05 01:31:31.325130 | orchestrator | PLAY [Create test project] ***************************************************** 2026-01-05 01:31:31.325139 | orchestrator | 2026-01-05 01:31:31.325144 | orchestrator | TASK [Create test domain] ****************************************************** 2026-01-05 01:31:31.325150 | orchestrator | Monday 05 January 2026 01:24:30 +0000 (0:00:00.084) 0:00:00.084 ******** 2026-01-05 01:31:31.325155 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325161 | orchestrator | 2026-01-05 01:31:31.325166 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-01-05 01:31:31.325171 | orchestrator | Monday 05 January 2026 01:24:33 +0000 (0:00:03.612) 0:00:03.696 ******** 2026-01-05 01:31:31.325176 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325181 | orchestrator | 2026-01-05 01:31:31.325185 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-01-05 01:31:31.325190 | orchestrator | Monday 05 January 2026 01:24:38 +0000 (0:00:04.344) 0:00:08.040 ******** 2026-01-05 01:31:31.325209 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325214 | orchestrator | 2026-01-05 01:31:31.325219 | orchestrator | TASK [Create test project] ***************************************************** 2026-01-05 01:31:31.325224 | orchestrator | Monday 05 January 2026 01:24:44 +0000 (0:00:06.406) 0:00:14.447 ******** 2026-01-05 01:31:31.325228 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325233 | orchestrator | 2026-01-05 01:31:31.325238 | orchestrator | TASK [Create test user] ******************************************************** 2026-01-05 01:31:31.325243 | orchestrator | Monday 05 January 2026 01:24:48 +0000 (0:00:04.052) 0:00:18.499 ******** 2026-01-05 01:31:31.325247 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325252 | orchestrator | 2026-01-05 01:31:31.325257 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-01-05 01:31:31.325262 | orchestrator | Monday 05 January 2026 01:24:52 +0000 (0:00:04.216) 0:00:22.716 ******** 2026-01-05 01:31:31.325266 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-01-05 01:31:31.325271 | orchestrator | changed: [localhost] => (item=member) 2026-01-05 01:31:31.325277 | orchestrator | changed: [localhost] => (item=creator) 2026-01-05 01:31:31.325282 | orchestrator | 2026-01-05 01:31:31.325287 | orchestrator | TASK [Create test server group] ************************************************ 2026-01-05 01:31:31.325291 | orchestrator | Monday 05 January 2026 01:25:04 +0000 (0:00:11.683) 0:00:34.399 ******** 2026-01-05 01:31:31.325296 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325301 | orchestrator | 2026-01-05 01:31:31.325305 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-01-05 01:31:31.325310 | orchestrator | Monday 05 January 2026 01:25:08 +0000 (0:00:04.348) 0:00:38.748 ******** 2026-01-05 01:31:31.325314 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325319 | orchestrator | 2026-01-05 01:31:31.325324 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-01-05 01:31:31.325338 | orchestrator | Monday 05 January 2026 01:25:13 +0000 (0:00:04.909) 0:00:43.657 ******** 2026-01-05 01:31:31.325343 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325348 | orchestrator | 2026-01-05 01:31:31.325353 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-01-05 01:31:31.325357 | orchestrator | Monday 05 January 2026 01:25:18 +0000 (0:00:04.268) 0:00:47.926 ******** 2026-01-05 01:31:31.325362 | orchestrator | changed: [localhost] 2026-01-05 01:31:31.325367 | orchestrator | 2026-01-05 01:31:31.325371 | orchestrator | TASK [Add rule to icmp security 
group] *****************************************
2026-01-05 01:31:31.325376 | orchestrator | Monday 05 January 2026 01:25:22 +0000 (0:00:03.978) 0:00:51.904 ********
2026-01-05 01:31:31.325380 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325387 | orchestrator |
2026-01-05 01:31:31.325395 | orchestrator | TASK [Create test keypair] *****************************************************
2026-01-05 01:31:31.325402 | orchestrator | Monday 05 January 2026 01:25:26 +0000 (0:00:04.127) 0:00:56.032 ********
2026-01-05 01:31:31.325411 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325435 | orchestrator |
2026-01-05 01:31:31.325450 | orchestrator | TASK [Create test network] *****************************************************
2026-01-05 01:31:31.325498 | orchestrator | Monday 05 January 2026 01:25:30 +0000 (0:00:04.030) 0:01:00.062 ********
2026-01-05 01:31:31.325506 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325513 | orchestrator |
2026-01-05 01:31:31.325520 | orchestrator | TASK [Create test subnet] ******************************************************
2026-01-05 01:31:31.325527 | orchestrator | Monday 05 January 2026 01:25:35 +0000 (0:00:04.778) 0:01:04.840 ********
2026-01-05 01:31:31.325534 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325541 | orchestrator |
2026-01-05 01:31:31.325549 | orchestrator | TASK [Create test router] ******************************************************
2026-01-05 01:31:31.325556 | orchestrator | Monday 05 January 2026 01:25:40 +0000 (0:00:05.332) 0:01:10.173 ********
2026-01-05 01:31:31.325563 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325570 | orchestrator |
2026-01-05 01:31:31.325586 | orchestrator | TASK [Create test instances] ***************************************************
2026-01-05 01:31:31.325593 | orchestrator | Monday 05 January 2026 01:25:50 +0000 (0:00:10.490) 0:01:20.663 ********
2026-01-05 01:31:31.325601 | orchestrator | changed: [localhost] => (item=test)
2026-01-05 01:31:31.325608 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-05 01:31:31.325616 | orchestrator |
2026-01-05 01:31:31.325626 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-05 01:31:31.325634 | orchestrator |
2026-01-05 01:31:31.325642 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-05 01:31:31.325649 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-05 01:31:31.325657 | orchestrator |
2026-01-05 01:31:31.325665 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-05 01:31:31.325672 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-05 01:31:31.325678 | orchestrator |
2026-01-05 01:31:31.325683 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-05 01:31:31.325688 | orchestrator |
2026-01-05 01:31:31.325694 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-05 01:31:31.325699 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-05 01:31:31.325704 | orchestrator |
2026-01-05 01:31:31.325710 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-01-05 01:31:31.325730 | orchestrator | Monday 05 January 2026 01:30:07 +0000 (0:04:16.366) 0:05:37.030 ********
2026-01-05 01:31:31.325735 | orchestrator | changed: [localhost] => (item=test)
2026-01-05 01:31:31.325741 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-05 01:31:31.325746 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-05 01:31:31.325752 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-05 01:31:31.325757 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-05 01:31:31.325762 | orchestrator |
2026-01-05 01:31:31.325768 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-01-05 01:31:31.325774 | orchestrator | Monday 05 January 2026 01:30:30 +0000 (0:00:23.045) 0:06:00.076 ********
2026-01-05 01:31:31.325779 | orchestrator | changed: [localhost] => (item=test)
2026-01-05 01:31:31.325784 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-05 01:31:31.325789 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-05 01:31:31.325795 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-05 01:31:31.325800 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-05 01:31:31.325805 | orchestrator |
2026-01-05 01:31:31.325810 | orchestrator | TASK [Create test volume] ******************************************************
2026-01-05 01:31:31.325816 | orchestrator | Monday 05 January 2026 01:31:04 +0000 (0:00:34.540) 0:06:34.616 ********
2026-01-05 01:31:31.325821 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325826 | orchestrator |
2026-01-05 01:31:31.325832 | orchestrator | TASK [Attach test volume] ******************************************************
2026-01-05 01:31:31.325837 | orchestrator | Monday 05 January 2026 01:31:11 +0000 (0:00:06.399) 0:06:41.016 ********
2026-01-05 01:31:31.325842 | orchestrator | changed: [localhost]
2026-01-05 01:31:31.325848 | orchestrator |
2026-01-05 01:31:31.325853 | orchestrator | TASK [Create floating ip address] **********************************************
2026-01-05 01:31:31.325858 | orchestrator | Monday 05 January 2026 01:31:25 +0000 (0:00:14.283) 0:06:55.300 ********
2026-01-05 01:31:31.325864 | orchestrator | ok: [localhost]
2026-01-05 01:31:31.325869 | orchestrator |
2026-01-05 01:31:31.325875 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-01-05 01:31:31.325880 | orchestrator | Monday 05 January 2026 01:31:30 +0000 (0:00:05.488) 0:07:00.788 ********
2026-01-05 01:31:31.325885 | orchestrator | ok: [localhost] => {
2026-01-05 01:31:31.325891 | orchestrator |     "msg": "192.168.112.132"
2026-01-05 01:31:31.325896 | orchestrator | }
2026-01-05 01:31:31.325902 | orchestrator |
2026-01-05 01:31:31.325907 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:31:31.325913 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:31:31.325924 | orchestrator |
2026-01-05 01:31:31.325929 | orchestrator |
2026-01-05 01:31:31.325935 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:31:31.325946 | orchestrator | Monday 05 January 2026 01:31:31 +0000 (0:00:00.046) 0:07:00.834 ********
2026-01-05 01:31:31.325951 | orchestrator | ===============================================================================
2026-01-05 01:31:31.325955 | orchestrator | Create test instances ------------------------------------------------- 256.37s
2026-01-05 01:31:31.325960 | orchestrator | Add tag to instances --------------------------------------------------- 34.54s
2026-01-05 01:31:31.325964 | orchestrator | Add metadata to instances ---------------------------------------------- 23.05s
2026-01-05 01:31:31.325969 | orchestrator | Attach test volume ----------------------------------------------------- 14.28s
2026-01-05 01:31:31.325973 | orchestrator | Add member roles to user test ------------------------------------------ 11.68s
2026-01-05 01:31:31.325978 | orchestrator | Create test router ----------------------------------------------------- 10.49s
2026-01-05 01:31:31.325982 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.41s
2026-01-05 01:31:31.325987 | orchestrator | Create test volume ------------------------------------------------------ 6.40s
2026-01-05 01:31:31.325992 | orchestrator | Create floating ip address ---------------------------------------------- 5.49s
2026-01-05 01:31:31.325996 | orchestrator | Create test subnet ------------------------------------------------------ 5.33s
2026-01-05 01:31:31.326001 | orchestrator | Create ssh security group ----------------------------------------------- 4.91s
2026-01-05 01:31:31.326005 | orchestrator | Create test network ----------------------------------------------------- 4.78s
2026-01-05 01:31:31.326010 | orchestrator | Create test server group ------------------------------------------------ 4.35s
2026-01-05 01:31:31.326051 | orchestrator | Create test-admin user -------------------------------------------------- 4.34s
2026-01-05 01:31:31.326057 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.27s
2026-01-05 01:31:31.326062 | orchestrator | Create test user -------------------------------------------------------- 4.22s
2026-01-05 01:31:31.326066 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.13s
2026-01-05 01:31:31.326071 | orchestrator | Create test project ----------------------------------------------------- 4.05s
2026-01-05 01:31:31.326075 | orchestrator | Create test keypair ----------------------------------------------------- 4.03s
2026-01-05 01:31:31.326080 | orchestrator | Create icmp security group ---------------------------------------------- 3.98s
2026-01-05 01:31:31.640219 | orchestrator | + server_list
2026-01-05 01:31:31.640342 | orchestrator | + openstack --os-cloud test server list
2026-01-05 01:31:35.163567 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-05 01:31:35.163704 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-01-05 01:31:35.163728 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-05 01:31:35.163746 | orchestrator | | 7c7bfb13-cbc3-430b-b589-bd6f2aefa74f | test-4 | ACTIVE | test=192.168.112.151, 192.168.200.73 | N/A (booted from volume) | SCS-1L-1 |
2026-01-05 01:31:35.163765 | orchestrator | | 7c20fdbf-88c9-4c40-8a33-284c55220a86 | test-3 | ACTIVE | test=192.168.112.159, 192.168.200.60 | N/A (booted from volume) | SCS-1L-1 |
2026-01-05 01:31:35.163785 | orchestrator | | 7f709f08-a560-4f31-8da6-5fb281af2458 | test-2 | ACTIVE | test=192.168.112.106, 192.168.200.34 | N/A (booted from volume) | SCS-1L-1 |
2026-01-05 01:31:35.163804 | orchestrator | | ae3006f5-9042-4903-901d-61f6a0235a19 | test-1 | ACTIVE | test=192.168.112.116, 192.168.200.116 | N/A (booted from volume) | SCS-1L-1 |
2026-01-05 01:31:35.163823 | orchestrator | | 57a4822b-1b3b-4773-aa33-fddfb0fea703 | test | ACTIVE | test=192.168.112.132, 192.168.200.239 | N/A (booted from volume) | SCS-1L-1 |
2026-01-05 01:31:35.163884 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-05 01:31:35.488512 | orchestrator | + openstack --os-cloud test server show test
2026-01-05 01:31:38.756381 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:38.756543 | orchestrator | | Field | Value |
2026-01-05 01:31:38.756570 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:38.756583 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-05 01:31:38.756595 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-05 01:31:38.756607 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-05 01:31:38.756618 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-01-05 01:31:38.756630 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-05 01:31:38.756641 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-05 01:31:38.756690 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-05 01:31:38.756703 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-05 01:31:38.756714 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-05 01:31:38.756730 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-05 01:31:38.756741 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-05 01:31:38.756752 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-05 01:31:38.756764 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-05 01:31:38.756775 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-05 01:31:38.756786 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-05 01:31:38.756805 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T01:26:35.000000 |
2026-01-05 01:31:38.756824 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-05 01:31:38.756838 | orchestrator | | accessIPv4 | |
2026-01-05 01:31:38.756852 | orchestrator | | accessIPv6 | |
2026-01-05 01:31:38.756866 | orchestrator | | addresses | test=192.168.112.132, 192.168.200.239 |
2026-01-05 01:31:38.756879 | orchestrator | | config_drive | |
2026-01-05 01:31:38.756893 | orchestrator | | created | 2026-01-05T01:25:59Z |
2026-01-05 01:31:38.756920 | orchestrator | | description | None |
2026-01-05 01:31:38.756940 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-05 01:31:38.756960 | orchestrator | | hostId | 7932437d9dbe1e22f4e65e044e6a705a61f7e0b2ef3486d6cb109aef |
2026-01-05 01:31:38.756973 | orchestrator | | host_status | None |
2026-01-05 01:31:38.756995 | orchestrator | | id | 57a4822b-1b3b-4773-aa33-fddfb0fea703 |
2026-01-05 01:31:38.757009 | orchestrator | | image | N/A (booted from volume) |
2026-01-05 01:31:38.757022 | orchestrator | | key_name | test |
2026-01-05 01:31:38.757050 | orchestrator | | locked | False |
2026-01-05 01:31:38.757080 | orchestrator | | locked_reason | None |
2026-01-05 01:31:38.757104 | orchestrator | | name | test |
2026-01-05 01:31:38.757124 | orchestrator | | pinned_availability_zone | None |
2026-01-05 01:31:38.757145 | orchestrator | | progress | 0 |
2026-01-05 01:31:38.757208 | orchestrator | | project_id | a598b9e9d61346a586c070aeca79b6e6 |
2026-01-05 01:31:38.757230 | orchestrator | | properties | hostname='test' |
2026-01-05 01:31:38.757261 | orchestrator | | security_groups | name='ssh' |
2026-01-05 01:31:38.757280 | orchestrator | | | name='icmp' |
2026-01-05 01:31:38.757300 | orchestrator | | server_groups | None |
2026-01-05 01:31:38.757327 | orchestrator | | status | ACTIVE |
2026-01-05 01:31:38.757346 | orchestrator | | tags | test |
2026-01-05 01:31:38.757371 | orchestrator | | trusted_image_certificates | None |
2026-01-05 01:31:38.757395 | orchestrator | | updated | 2026-01-05T01:30:11Z |
2026-01-05 01:31:38.757432 | orchestrator | | user_id | 14d8e67a66b844649be30401744caec4 |
2026-01-05 01:31:38.757451 | orchestrator | | volumes_attached | delete_on_termination='True', id='bbf8cb4f-855f-4c3b-b811-60e75cbd1d92' |
2026-01-05 01:31:38.757467 | orchestrator | | | delete_on_termination='False', id='bdb5ecdb-ee0e-4e2f-afed-68b773ef3c7b' |
2026-01-05 01:31:38.760205 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:39.052587 | orchestrator | + openstack --os-cloud test server show test-1
2026-01-05 01:31:42.183917 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:42.184041 | orchestrator | | Field | Value |
2026-01-05 01:31:42.184056 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:42.184065 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-05 01:31:42.184074 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-05 01:31:42.184098 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-05 01:31:42.184107 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-01-05 01:31:42.184116 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-05 01:31:42.184124 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-05 01:31:42.184149 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-05 01:31:42.184211 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-05 01:31:42.184231 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-05 01:31:42.184251 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-05 01:31:42.184265 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-05 01:31:42.184314 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-05 01:31:42.184328 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-05 01:31:42.184339 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-05 01:31:42.184351 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-05 01:31:42.184364 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T01:27:30.000000 |
2026-01-05 01:31:42.184387 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-05 01:31:42.184401 | orchestrator | | accessIPv4 | |
2026-01-05 01:31:42.184414 | orchestrator | | accessIPv6 | |
2026-01-05 01:31:42.184428 | orchestrator | | addresses | test=192.168.112.116, 192.168.200.116 |
2026-01-05 01:31:42.184443 | orchestrator | | config_drive | |
2026-01-05 01:31:42.184473 | orchestrator | | created | 2026-01-05T01:26:56Z |
2026-01-05 01:31:42.184509 | orchestrator | | description | None |
2026-01-05 01:31:42.184520 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-05 01:31:42.184528 | orchestrator | | hostId | 42a2bf07f27d8072b0bf5e5e1a5e301085e7f8ddaeb32fdebc5c7f8c |
2026-01-05 01:31:42.184536 | orchestrator | | host_status | None |
2026-01-05 01:31:42.184552 | orchestrator | | id | ae3006f5-9042-4903-901d-61f6a0235a19 |
2026-01-05 01:31:42.184568 | orchestrator | | image | N/A (booted from volume) |
2026-01-05 01:31:42.184580 | orchestrator | | key_name | test |
2026-01-05 01:31:42.184588 | orchestrator | | locked | False |
2026-01-05 01:31:42.184602 | orchestrator | | locked_reason | None |
2026-01-05 01:31:42.184610 | orchestrator | | name | test-1 |
2026-01-05 01:31:42.184618 | orchestrator | | pinned_availability_zone | None |
2026-01-05 01:31:42.184626 | orchestrator | | progress | 0 |
2026-01-05 01:31:42.184635 | orchestrator | | project_id | a598b9e9d61346a586c070aeca79b6e6 |
2026-01-05 01:31:42.184643 | orchestrator | | properties | hostname='test-1' |
2026-01-05 01:31:42.184657 | orchestrator | | security_groups | name='ssh' |
2026-01-05 01:31:42.184666 | orchestrator | | | name='icmp' |
2026-01-05 01:31:42.184678 | orchestrator | | server_groups | None |
2026-01-05 01:31:42.184691 | orchestrator | | status | ACTIVE |
2026-01-05 01:31:42.184699 | orchestrator | | tags | test |
2026-01-05 01:31:42.184707 | orchestrator | | trusted_image_certificates | None |
2026-01-05 01:31:42.184716 | orchestrator | | updated | 2026-01-05T01:30:16Z |
2026-01-05 01:31:42.184724 | orchestrator | | user_id | 14d8e67a66b844649be30401744caec4 |
2026-01-05 01:31:42.184732 | orchestrator | | volumes_attached | delete_on_termination='True', id='75b59b5d-da86-4593-bc87-7cf9b8cc0511' |
2026-01-05 01:31:42.184740 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:42.470240 | orchestrator | + openstack --os-cloud test server show test-2
2026-01-05 01:31:45.452406 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:45.452531 | orchestrator | | Field | Value |
2026-01-05 01:31:45.452580 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:45.452587 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-05 01:31:45.452593 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-05 01:31:45.452599 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-05 01:31:45.452605 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-01-05 01:31:45.452611 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-05 01:31:45.452616 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-05 01:31:45.452646 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-05 01:31:45.452652 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-05 01:31:45.452664 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-05 01:31:45.452675 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-05 01:31:45.452682 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-05 01:31:45.452688 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-05 01:31:45.452694 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-05 01:31:45.452700 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-05 01:31:45.452707 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-05 01:31:45.452713 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T01:28:22.000000 |
2026-01-05 01:31:45.452724 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-05 01:31:45.452729 | orchestrator | | accessIPv4 | |
2026-01-05 01:31:45.452740 | orchestrator | | accessIPv6 | |
2026-01-05 01:31:45.452750 | orchestrator | | addresses | test=192.168.112.106, 192.168.200.34 |
2026-01-05 01:31:45.452756 | orchestrator | | config_drive | |
2026-01-05 01:31:45.452762 | orchestrator | | created | 2026-01-05T01:27:47Z |
2026-01-05 01:31:45.452768 | orchestrator | | description | None |
2026-01-05 01:31:45.452775 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-05 01:31:45.452781 | orchestrator | | hostId | 72cdd3362a5c3512c8947c3764d673a9a7815369b2a37a8091678609 |
2026-01-05 01:31:45.452788 | orchestrator | | host_status | None |
2026-01-05 01:31:45.452799 | orchestrator | | id | 7f709f08-a560-4f31-8da6-5fb281af2458 |
2026-01-05 01:31:45.452810 | orchestrator | | image | N/A (booted from volume) |
2026-01-05 01:31:45.452820 | orchestrator | | key_name | test |
2026-01-05 01:31:45.452827 | orchestrator | | locked | False |
2026-01-05 01:31:45.452833 | orchestrator | | locked_reason | None |
2026-01-05 01:31:45.452838 | orchestrator | | name | test-2 |
2026-01-05 01:31:45.452844 | orchestrator | | pinned_availability_zone | None |
2026-01-05 01:31:45.452850 | orchestrator | | progress | 0 |
2026-01-05 01:31:45.452857 | orchestrator | | project_id | a598b9e9d61346a586c070aeca79b6e6 |
2026-01-05 01:31:45.452863 | orchestrator | | properties | hostname='test-2' |
2026-01-05 01:31:45.452880 | orchestrator | | security_groups | name='ssh' |
2026-01-05 01:31:45.452886 | orchestrator | | | name='icmp' |
2026-01-05 01:31:45.452893 | orchestrator | | server_groups | None |
2026-01-05 01:31:45.452904 | orchestrator | | status | ACTIVE |
2026-01-05 01:31:45.452911 | orchestrator | | tags | test |
2026-01-05 01:31:45.452920 | orchestrator | | trusted_image_certificates | None |
2026-01-05 01:31:45.452929 | orchestrator | | updated | 2026-01-05T01:30:21Z |
2026-01-05 01:31:45.452938 | orchestrator | | user_id | 14d8e67a66b844649be30401744caec4 |
2026-01-05 01:31:45.452946 | orchestrator | | volumes_attached | delete_on_termination='True', id='5cf3f4e9-828e-45d1-a906-02f586a3c8f8' |
2026-01-05 01:31:45.458342 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:45.769266 | orchestrator | + openstack --os-cloud test server show test-3
2026-01-05 01:31:48.720900 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:48.721522 | orchestrator | | Field | Value |
2026-01-05 01:31:48.721560 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:48.721569 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-05 01:31:48.721576 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-05 01:31:48.721583 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-05 01:31:48.721589 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-01-05 01:31:48.721596 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-05 01:31:48.721614 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-05 01:31:48.721636 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-05 01:31:48.721643 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-05 01:31:48.721649 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-05 01:31:48.721659 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-05 01:31:48.721666 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-05 01:31:48.721672 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-05 01:31:48.721679 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-05 01:31:48.721685 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-05 01:31:48.721692 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-05 01:31:48.721715 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T01:29:09.000000 |
2026-01-05 01:31:48.721733 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-05 01:31:48.721745 | orchestrator | | accessIPv4 | |
2026-01-05 01:31:48.721752 | orchestrator | | accessIPv6 | |
2026-01-05 01:31:48.721762 | orchestrator | | addresses | test=192.168.112.159, 192.168.200.60 |
2026-01-05 01:31:48.721769 | orchestrator | | config_drive | |
2026-01-05 01:31:48.721775 | orchestrator | | created | 2026-01-05T01:28:41Z |
2026-01-05 01:31:48.721781 | orchestrator | | description | None |
2026-01-05 01:31:48.721788 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-05 01:31:48.721798 | orchestrator | | hostId | 42a2bf07f27d8072b0bf5e5e1a5e301085e7f8ddaeb32fdebc5c7f8c |
2026-01-05 01:31:48.721805 | orchestrator | | host_status | None |
2026-01-05 01:31:48.721816 | orchestrator | | id | 7c20fdbf-88c9-4c40-8a33-284c55220a86 |
2026-01-05 01:31:48.721823 | orchestrator | | image | N/A (booted from volume) |
2026-01-05 01:31:48.721829 | orchestrator | | key_name | test |
2026-01-05 01:31:48.721838 | orchestrator | | locked | False |
2026-01-05 01:31:48.721845 | orchestrator | | locked_reason | None |
2026-01-05 01:31:48.721851 | orchestrator | | name | test-3 |
2026-01-05 01:31:48.721860 | orchestrator | | pinned_availability_zone | None |
2026-01-05 01:31:48.721876 | orchestrator | | progress | 0 |
2026-01-05 01:31:48.721888 | orchestrator | | project_id | a598b9e9d61346a586c070aeca79b6e6 |
2026-01-05 01:31:48.721899 | orchestrator | | properties | hostname='test-3' |
2026-01-05 01:31:48.721915 | orchestrator | | security_groups | name='ssh' |
2026-01-05 01:31:48.721922 | orchestrator | | | name='icmp' |
2026-01-05 01:31:48.721929 | orchestrator | | server_groups | None |
2026-01-05 01:31:48.721938 | orchestrator | | status | ACTIVE |
2026-01-05 01:31:48.721945 | orchestrator | | tags | test |
2026-01-05 01:31:48.721951 | orchestrator | | trusted_image_certificates | None |
2026-01-05 01:31:48.721962 | orchestrator | | updated | 2026-01-05T01:30:25Z |
2026-01-05 01:31:48.721969 | orchestrator | | user_id | 14d8e67a66b844649be30401744caec4 |
2026-01-05 01:31:48.721975 | orchestrator | | volumes_attached | delete_on_termination='True', id='8aa27fbf-c71c-4dae-95ba-42e2c6f477d8' |
2026-01-05 01:31:48.726268 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:48.998816 | orchestrator | + openstack --os-cloud test server show test-4
2026-01-05 01:31:52.022589 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:52.022672 | orchestrator | | Field | Value |
2026-01-05 01:31:52.022700 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:52.022713 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-05 01:31:52.022725 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-05 01:31:52.022754 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-05 01:31:52.022766 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-01-05 01:31:52.022773 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-05 01:31:52.022779 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-05 01:31:52.022798 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-05 01:31:52.022805 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-05 01:31:52.022812 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-05 01:31:52.022818 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-05 01:31:52.023064 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-05 01:31:52.023073 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-05 01:31:52.023085 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-05 01:31:52.023092 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-05 01:31:52.023099 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-05 01:31:52.023107 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T01:29:54.000000 |
2026-01-05 01:31:52.023120 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-05 01:31:52.023131 | orchestrator | | accessIPv4 | |
2026-01-05 01:31:52.023139 | orchestrator | | accessIPv6 | |
2026-01-05 01:31:52.023147 | orchestrator | | addresses | test=192.168.112.151, 192.168.200.73 |
2026-01-05 01:31:52.023155 | orchestrator | | config_drive | |
2026-01-05 01:31:52.023168 | orchestrator | | created | 2026-01-05T01:29:29Z |
2026-01-05 01:31:52.023176 | orchestrator | | description | None |
2026-01-05 01:31:52.023184 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-05 01:31:52.023192 | orchestrator | | hostId | 72cdd3362a5c3512c8947c3764d673a9a7815369b2a37a8091678609 |
2026-01-05 01:31:52.023199 | orchestrator | | host_status | None |
2026-01-05 01:31:52.023215 | orchestrator | | id | 7c7bfb13-cbc3-430b-b589-bd6f2aefa74f |
2026-01-05 01:31:52.023223 | orchestrator | | image | N/A (booted from volume) |
2026-01-05 01:31:52.023232 | orchestrator | | key_name | test |
2026-01-05 01:31:52.023239 | orchestrator | | locked | False |
2026-01-05 01:31:52.023251 | orchestrator | | locked_reason | None |
2026-01-05 01:31:52.023259 | orchestrator | | name | test-4 |
2026-01-05 01:31:52.023267 | orchestrator | | pinned_availability_zone | None |
2026-01-05 01:31:52.023275 | orchestrator | | progress | 0 |
2026-01-05 01:31:52.023283 | orchestrator | | project_id | a598b9e9d61346a586c070aeca79b6e6 |
2026-01-05 01:31:52.023290 | orchestrator | | properties | hostname='test-4' |
2026-01-05 01:31:52.023305 | orchestrator | | security_groups | name='ssh' |
2026-01-05 01:31:52.023317 | orchestrator | | | name='icmp' |
2026-01-05 01:31:52.023329 | orchestrator | | server_groups | None |
2026-01-05 01:31:52.023346 | orchestrator | | status | ACTIVE |
2026-01-05 01:31:52.023357 | orchestrator | | tags | test |
2026-01-05 01:31:52.023368 | orchestrator | | trusted_image_certificates | None |
2026-01-05 01:31:52.023380 | orchestrator | | updated | 2026-01-05T01:30:30Z |
2026-01-05 01:31:52.023392 | orchestrator | | user_id | 14d8e67a66b844649be30401744caec4 |
2026-01-05 01:31:52.023404 | orchestrator | | volumes_attached | delete_on_termination='True', id='b1663303-c962-4073-b91f-29cc7eb0d820' |
2026-01-05 01:31:52.026497 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-05 01:31:52.334158 | orchestrator | + server_ping
2026-01-05 01:31:52.335091 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-05 01:31:52.335834 | orchestrator | ++ tr -d '\r'
2026-01-05 01:31:55.153116 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-05 01:31:55.153183 | orchestrator | + ping -c3 192.168.112.106
2026-01-05 01:31:55.172803 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2026-01-05 01:31:55.172858 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=9.89 ms
2026-01-05 01:31:56.167415 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.97 ms
2026-01-05 01:31:57.168544 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.98 ms
2026-01-05 01:31:57.168617 | orchestrator |
2026-01-05 01:31:57.168629 | orchestrator | --- 192.168.112.106 ping statistics ---
2026-01-05 01:31:57.168637 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-05 01:31:57.168645 | orchestrator | rtt min/avg/max/mdev = 1.978/4.945/9.891/3.520 ms
2026-01-05 01:31:57.168678 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-05 01:31:57.168687 | orchestrator | + ping -c3 192.168.112.132
2026-01-05 01:31:57.179905 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-01-05 01:31:57.179965 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=8.68 ms
2026-01-05 01:31:58.175960 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.54 ms
2026-01-05 01:31:59.178431 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.13 ms
2026-01-05 01:31:59.178559 | orchestrator |
2026-01-05 01:31:59.178569 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-01-05 01:31:59.178575 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-05 01:31:59.178580 | orchestrator | rtt min/avg/max/mdev = 2.131/4.450/8.684/2.998 ms
2026-01-05 01:31:59.178585 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-05 01:31:59.178590 | orchestrator | + ping -c3 192.168.112.151
2026-01-05 01:31:59.192621 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2026-01-05 01:31:59.192699 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=6.80 ms 2026-01-05 01:32:00.188936 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.01 ms 2026-01-05 01:32:01.190346 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=1.83 ms 2026-01-05 01:32:01.190456 | orchestrator | 2026-01-05 01:32:01.190472 | orchestrator | --- 192.168.112.151 ping statistics --- 2026-01-05 01:32:01.190485 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 01:32:01.190498 | orchestrator | rtt min/avg/max/mdev = 1.833/3.548/6.804/2.302 ms 2026-01-05 01:32:01.191155 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 01:32:01.191239 | orchestrator | + ping -c3 192.168.112.116 2026-01-05 01:32:01.206220 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2026-01-05 01:32:01.206311 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=9.49 ms 2026-01-05 01:32:02.200456 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.79 ms 2026-01-05 01:32:03.201844 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.96 ms 2026-01-05 01:32:03.201957 | orchestrator | 2026-01-05 01:32:03.201975 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-01-05 01:32:03.201989 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 01:32:03.202001 | orchestrator | rtt min/avg/max/mdev = 1.962/4.747/9.491/3.371 ms 2026-01-05 01:32:03.202598 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 01:32:03.202647 | orchestrator | + ping -c3 192.168.112.159 2026-01-05 01:32:03.217013 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 
2026-01-05 01:32:03.217131 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=9.39 ms 2026-01-05 01:32:04.211287 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.15 ms 2026-01-05 01:32:05.213291 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=2.29 ms 2026-01-05 01:32:05.213407 | orchestrator | 2026-01-05 01:32:05.213418 | orchestrator | --- 192.168.112.159 ping statistics --- 2026-01-05 01:32:05.213425 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 01:32:05.213430 | orchestrator | rtt min/avg/max/mdev = 2.152/4.608/9.386/3.378 ms 2026-01-05 01:32:05.213705 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 01:32:05.315924 | orchestrator | ok: Runtime: 0:11:47.609041 2026-01-05 01:32:05.351458 | 2026-01-05 01:32:05.351649 | TASK [Run tempest] 2026-01-05 01:32:06.056087 | orchestrator | 2026-01-05 01:32:06.056232 | orchestrator | # Tempest 2026-01-05 01:32:06.056242 | orchestrator | 2026-01-05 01:32:06.056248 | orchestrator | + set -e 2026-01-05 01:32:06.056253 | orchestrator | + echo 2026-01-05 01:32:06.056259 | orchestrator | + echo '# Tempest' 2026-01-05 01:32:06.056267 | orchestrator | + echo 2026-01-05 01:32:06.056291 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-01-05 01:32:18.163205 | orchestrator | 2026-01-05 01:32:18 | INFO  | Task 8fef5553-2ae5-427d-b4e9-43c1b10db5a8 (tempest) was prepared for execution. 2026-01-05 01:32:18.163318 | orchestrator | 2026-01-05 01:32:18 | INFO  | It takes a moment until task 8fef5553-2ae5-427d-b4e9-43c1b10db5a8 (tempest) has been started and output is visible here. 
2026-01-05 01:33:37.753381 | orchestrator | 2026-01-05 01:33:37.753482 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-01-05 01:33:37.753490 | orchestrator | 2026-01-05 01:33:37.753495 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-01-05 01:33:37.753509 | orchestrator | Monday 05 January 2026 01:32:22 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-01-05 01:33:37.753513 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753519 | orchestrator | 2026-01-05 01:33:37.753522 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-01-05 01:33:37.753527 | orchestrator | Monday 05 January 2026 01:32:23 +0000 (0:00:00.727) 0:00:00.987 ******** 2026-01-05 01:33:37.753531 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753535 | orchestrator | 2026-01-05 01:33:37.753545 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-01-05 01:33:37.753550 | orchestrator | Monday 05 January 2026 01:32:24 +0000 (0:00:01.253) 0:00:02.240 ******** 2026-01-05 01:33:37.753553 | orchestrator | ok: [testbed-manager] 2026-01-05 01:33:37.753558 | orchestrator | 2026-01-05 01:33:37.753562 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-01-05 01:33:37.753566 | orchestrator | Monday 05 January 2026 01:32:25 +0000 (0:00:00.450) 0:00:02.691 ******** 2026-01-05 01:33:37.753570 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753573 | orchestrator | 2026-01-05 01:33:37.753577 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-01-05 01:33:37.753581 | orchestrator | Monday 05 January 2026 01:32:47 +0000 (0:00:22.037) 0:00:24.729 ******** 2026-01-05 01:33:37.753585 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-01-05 
01:33:37.753589 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-01-05 01:33:37.753593 | orchestrator | 2026-01-05 01:33:37.753597 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-01-05 01:33:37.753600 | orchestrator | Monday 05 January 2026 01:32:55 +0000 (0:00:08.745) 0:00:33.475 ******** 2026-01-05 01:33:37.753604 | orchestrator | ok: [testbed-manager] => { 2026-01-05 01:33:37.753608 | orchestrator |  "changed": false, 2026-01-05 01:33:37.753612 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:33:37.753616 | orchestrator | } 2026-01-05 01:33:37.753621 | orchestrator | 2026-01-05 01:33:37.753624 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-01-05 01:33:37.753628 | orchestrator | Monday 05 January 2026 01:32:56 +0000 (0:00:00.149) 0:00:33.624 ******** 2026-01-05 01:33:37.753632 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753636 | orchestrator | 2026-01-05 01:33:37.753640 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************ 2026-01-05 01:33:37.753643 | orchestrator | Monday 05 January 2026 01:32:59 +0000 (0:00:03.562) 0:00:37.187 ******** 2026-01-05 01:33:37.753647 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753651 | orchestrator | 2026-01-05 01:33:37.753655 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-01-05 01:33:37.753658 | orchestrator | Monday 05 January 2026 01:33:01 +0000 (0:00:01.724) 0:00:38.912 ******** 2026-01-05 01:33:37.753662 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753666 | orchestrator | 2026-01-05 01:33:37.753670 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-01-05 01:33:37.753709 | orchestrator | Monday 05 January 2026 01:33:04 +0000 (0:00:03.648) 
0:00:42.560 ******** 2026-01-05 01:33:37.753714 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753764 | orchestrator | 2026-01-05 01:33:37.753769 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-01-05 01:33:37.753773 | orchestrator | Monday 05 January 2026 01:33:05 +0000 (0:00:00.195) 0:00:42.755 ******** 2026-01-05 01:33:37.753777 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753781 | orchestrator | 2026-01-05 01:33:37.753784 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-01-05 01:33:37.753789 | orchestrator | Monday 05 January 2026 01:33:07 +0000 (0:00:02.529) 0:00:45.285 ******** 2026-01-05 01:33:37.753795 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753801 | orchestrator | 2026-01-05 01:33:37.753806 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-01-05 01:33:37.753813 | orchestrator | Monday 05 January 2026 01:33:17 +0000 (0:00:10.273) 0:00:55.559 ******** 2026-01-05 01:33:37.753823 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.753830 | orchestrator | 2026-01-05 01:33:37.753835 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-01-05 01:33:37.753841 | orchestrator | Monday 05 January 2026 01:33:18 +0000 (0:00:00.817) 0:00:56.377 ******** 2026-01-05 01:33:37.753846 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753852 | orchestrator | 2026-01-05 01:33:37.753858 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-01-05 01:33:37.753864 | orchestrator | Monday 05 January 2026 01:33:20 +0000 (0:00:01.527) 0:00:57.904 ******** 2026-01-05 01:33:37.753870 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753876 | orchestrator | 2026-01-05 01:33:37.753883 | orchestrator | TASK 
[osism.validations.tempest : Set fact for config option api_extensions] *** 2026-01-05 01:33:37.753890 | orchestrator | Monday 05 January 2026 01:33:21 +0000 (0:00:01.519) 0:00:59.423 ******** 2026-01-05 01:33:37.753894 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753898 | orchestrator | 2026-01-05 01:33:37.753902 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-01-05 01:33:37.753906 | orchestrator | Monday 05 January 2026 01:33:22 +0000 (0:00:00.212) 0:00:59.636 ******** 2026-01-05 01:33:37.753909 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753913 | orchestrator | 2026-01-05 01:33:37.753917 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-01-05 01:33:37.753921 | orchestrator | Monday 05 January 2026 01:33:22 +0000 (0:00:00.205) 0:00:59.842 ******** 2026-01-05 01:33:37.753924 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:33:37.753928 | orchestrator | 2026-01-05 01:33:37.753932 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] *** 2026-01-05 01:33:37.753951 | orchestrator | Monday 05 January 2026 01:33:26 +0000 (0:00:03.850) 0:01:03.693 ******** 2026-01-05 01:33:37.753956 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-01-05 01:33:37.753960 | orchestrator |  "changed": false, 2026-01-05 01:33:37.753964 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:33:37.753968 | orchestrator | } 2026-01-05 01:33:37.753972 | orchestrator | 2026-01-05 01:33:37.753976 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-01-05 01:33:37.753979 | orchestrator | Monday 05 January 2026 01:33:26 +0000 (0:00:00.172) 0:01:03.865 ******** 2026-01-05 01:33:37.753983 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-01-05 
01:33:37.753994 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-01-05 01:33:37.754000 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:33:37.754006 | orchestrator | 2026-01-05 01:33:37.754060 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-01-05 01:33:37.754070 | orchestrator | Monday 05 January 2026 01:33:26 +0000 (0:00:00.412) 0:01:04.278 ******** 2026-01-05 01:33:37.754085 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:33:37.754092 | orchestrator | 2026-01-05 01:33:37.754099 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-01-05 01:33:37.754105 | orchestrator | Monday 05 January 2026 01:33:26 +0000 (0:00:00.148) 0:01:04.426 ******** 2026-01-05 01:33:37.754111 | orchestrator | ok: [testbed-manager] 2026-01-05 01:33:37.754118 | orchestrator | 2026-01-05 01:33:37.754123 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-01-05 01:33:37.754129 | orchestrator | Monday 05 January 2026 01:33:27 +0000 (0:00:00.549) 0:01:04.975 ******** 2026-01-05 01:33:37.754136 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.754142 | orchestrator | 2026-01-05 01:33:37.754149 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-01-05 01:33:37.754155 | orchestrator | Monday 05 January 2026 01:33:28 +0000 (0:00:00.961) 0:01:05.937 ******** 2026-01-05 01:33:37.754161 | orchestrator | ok: [testbed-manager] 2026-01-05 01:33:37.754167 | orchestrator | 2026-01-05 01:33:37.754174 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-01-05 01:33:37.754180 | orchestrator | Monday 05 January 2026 01:33:28 +0000 (0:00:00.436) 0:01:06.373 ******** 2026-01-05 01:33:37.754186 | orchestrator | skipping: [testbed-manager] 2026-01-05 
01:33:37.754193 | orchestrator | 2026-01-05 01:33:37.754197 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-01-05 01:33:37.754200 | orchestrator | Monday 05 January 2026 01:33:28 +0000 (0:00:00.156) 0:01:06.530 ******** 2026-01-05 01:33:37.754205 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-01-05 01:33:37.754212 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-01-05 01:33:37.754218 | orchestrator | 2026-01-05 01:33:37.754225 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-01-05 01:33:37.754231 | orchestrator | Monday 05 January 2026 01:33:36 +0000 (0:00:07.731) 0:01:14.262 ******** 2026-01-05 01:33:37.754240 | orchestrator | changed: [testbed-manager] 2026-01-05 01:33:37.754246 | orchestrator | 2026-01-05 01:33:37.754251 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:33:37.754258 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:33:37.754264 | orchestrator | 2026-01-05 01:33:37.754270 | orchestrator | 2026-01-05 01:33:37.754276 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:33:37.754282 | orchestrator | Monday 05 January 2026 01:33:37 +0000 (0:00:01.071) 0:01:15.334 ******** 2026-01-05 01:33:37.754288 | orchestrator | =============================================================================== 2026-01-05 01:33:37.754294 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 22.04s 2026-01-05 01:33:37.754300 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.27s 2026-01-05 01:33:37.754306 | orchestrator | 
osism.validations.tempest : Resolve image IDs --------------------------- 8.75s 2026-01-05 01:33:37.754312 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.73s 2026-01-05 01:33:37.754317 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.85s 2026-01-05 01:33:37.754323 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.65s 2026-01-05 01:33:37.754329 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.56s 2026-01-05 01:33:37.754334 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.53s 2026-01-05 01:33:37.754340 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.72s 2026-01-05 01:33:37.754346 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.53s 2026-01-05 01:33:37.754351 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.52s 2026-01-05 01:33:37.754364 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.25s 2026-01-05 01:33:37.754370 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.07s 2026-01-05 01:33:37.754377 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.96s 2026-01-05 01:33:37.754388 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.82s 2026-01-05 01:33:37.754394 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.73s 2026-01-05 01:33:37.754400 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.55s 2026-01-05 01:33:37.754414 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.45s 2026-01-05 01:33:38.183949 | orchestrator | 
osism.validations.tempest : Get stats of include list ------------------- 0.44s 2026-01-05 01:33:38.184038 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.41s 2026-01-05 01:33:38.509518 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-01-05 01:33:38.512381 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-01-05 01:33:38.514940 | orchestrator | 2026-01-05 01:33:38.514997 | orchestrator | ## IDENTITY (API) 2026-01-05 01:33:38.515002 | orchestrator | 2026-01-05 01:33:38.515007 | orchestrator | + echo 2026-01-05 01:33:38.515011 | orchestrator | + echo '## IDENTITY (API)' 2026-01-05 01:33:38.515016 | orchestrator | + echo 2026-01-05 01:33:38.515020 | orchestrator | + _tempest tempest.api.identity.v3 2026-01-05 01:33:38.515025 | orchestrator | + local regex=tempest.api.identity.v3 2026-01-05 01:33:38.517091 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-01-05 01:33:38.517442 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-05 01:33:38.521827 | orchestrator | + tee -a /opt/tempest/20260105-0133.log 2026-01-05 01:33:42.550446 | orchestrator | 2026-01-05 01:33:42.547 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-05 01:33:42.644470 | orchestrator | 2026-01-05 01:33:42.641 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-05 01:33:42.644571 | orchestrator | 2026-01-05 01:33:42.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-05 01:33:42.644579 | orchestrator | 2026-01-05 01:33:42.642 1 INFO 
tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-05 01:33:42.644585 | orchestrator | 2026-01-05 01:33:42.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-05 01:33:42.644917 | orchestrator | 2026-01-05 01:33:42.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-05 01:33:42.645622 | orchestrator | 2026-01-05 01:33:42.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-05 01:33:42.645640 | orchestrator | 2026-01-05 01:33:42.644 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-05 01:33:42.645649 | orchestrator | 2026-01-05 01:33:42.644 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-05 01:33:42.646042 | orchestrator | 2026-01-05 01:33:42.644 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-05 01:33:42.646617 | orchestrator | 2026-01-05 01:33:42.645 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-05 01:33:42.647153 | orchestrator | 2026-01-05 01:33:42.645 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-05 01:33:42.647608 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-05 01:33:42.647647 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-05 01:33:42.647653 | orchestrator | 2026-01-05 01:33:42.646 1 INFO 
tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-05 01:33:42.647989 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-05 01:33:42.648003 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-05 01:33:42.648191 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-05 01:33:42.648199 | orchestrator | 2026-01-05 01:33:42.646 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-05 01:33:42.648216 | orchestrator | 2026-01-05 01:33:42.647 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-05 01:33:42.648508 | orchestrator | 2026-01-05 01:33:42.647 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-05 01:33:42.649103 | orchestrator | 2026-01-05 01:33:42.647 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-05 01:33:42.649177 | orchestrator | 2026-01-05 01:33:42.647 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-05 01:33:56.645560 | orchestrator | 2026-01-05 01:33:56.645679 | orchestrator | ========================= 2026-01-05 01:33:56.645694 | orchestrator | Failures during discovery 2026-01-05 01:33:56.645703 | orchestrator | ========================= 2026-01-05 01:33:56.645712 | orchestrator | --- stdout --- 2026-01-05 01:33:56.645723 | orchestrator | 2026-01-05 01:33:46.229 10 INFO tempest [-] Using tempest 
config file /tempest/etc/tempest.conf 2026-01-05 01:33:56.645735 | orchestrator | 2026-01-05 01:33:46.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-05 01:33:56.645746 | orchestrator | 2026-01-05 01:33:46.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-05 01:33:56.645756 | orchestrator | 2026-01-05 01:33:46.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-05 01:33:56.645819 | orchestrator | 2026-01-05 01:33:46.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-05 01:33:56.645829 | orchestrator | 2026-01-05 01:33:46.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-05 01:33:56.645838 | orchestrator | 2026-01-05 01:33:46.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-05 01:33:56.645851 | orchestrator | 2026-01-05 01:33:46.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-05 01:33:56.645857 | orchestrator | 2026-01-05 01:33:46.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-05 01:33:56.645862 | orchestrator | 2026-01-05 01:33:46.232 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-05 01:33:56.645868 | orchestrator | 2026-01-05 01:33:46.232 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-05 01:33:56.645873 | orchestrator | 2026-01-05 01:33:46.232 10 INFO tempest.test_discover.plugins [-] Register additional config options from 
Tempest plugin: ironic_tests
2026-01-05 01:33:56.645878 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:33:56.645925 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:33:56.645932 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:33:56.645937 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:33:56.645943 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:33:56.645948 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:33:56.645954 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:33:56.645959 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:33:56.645976 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:33:56.645982 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:33:56.645987 | orchestrator | 2026-01-05 01:33:46.233 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:33:56.646000 | orchestrator | 2026-01-05 01:33:46.236 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-05 01:33:56.646010 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-05 01:33:56.646067 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-05 01:33:56.646073 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-05 01:33:56.646078 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:33:56.646102 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-05 01:33:56.646109 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-05 01:33:56.646115 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-05 01:33:56.646120 | orchestrator | 2026-01-05 01:33:47.075 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-05 01:33:56.646127 | orchestrator | 2026-01-05 01:33:47.076 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-05 01:33:56.646133 | orchestrator | 2026-01-05 01:33:47.076 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-05 01:33:56.646139 | orchestrator | 2026-01-05 01:33:47.076 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-05 01:33:56.646145 | orchestrator | --- import errors ---
2026-01-05 01:33:56.646152 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:33:56.646158 | orchestrator | Traceback (most recent call last):
2026-01-05 01:33:56.646166 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:33:56.646171 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:33:56.646178 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:33:56.646194 | orchestrator |     __import__(name)
2026-01-05 01:33:56.646200 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:33:56.646206 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-05 01:33:56.646212 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:33:56.646219 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:33:56.646224 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:33:56.646230 | orchestrator |
2026-01-05 01:33:56.646237 | orchestrator | ================================================================================
2026-01-05 01:33:56.646243 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
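The `AttributeError` above comes from `neutron_tempest_plugin` calling `testtools.try_import` at module import time, a helper that the testtools build in this image no longer exports. As a rough illustration of the semantics the plugin relies on (return the imported module, or a fallback when the import fails), a minimal stand-in might look like this; the function name mirrors the removed helper and is not part of the testtools installed in the job image:

```python
import importlib


def try_import(name, alternative=None):
    """Return the module named `name`, or `alternative` if it cannot be imported.

    Minimal sketch of the semantics that testtools.try_import provided,
    as used at import time in test_dns_integration.py.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# The plugin's pattern: dns_base stays None when the designate
# tempest plugin is not installed in the environment.
dns_base = try_import('designate_tempest_plugin.tests.base')
```

Because the failure happens during discovery, every `tempest run` invocation below hits the same import error regardless of the `--regex` it was given; fixing it would mean pinning a testtools release that still ships the helper, or patching the plugin in the image.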
2026-01-05 01:33:57.176118 | orchestrator |
2026-01-05 01:33:57.176203 | orchestrator | ## IMAGE (API)
2026-01-05 01:33:57.176210 | orchestrator |
2026-01-05 01:33:57.176214 | orchestrator | + echo
2026-01-05 01:33:57.176219 | orchestrator | + echo '## IMAGE (API)'
2026-01-05 01:33:57.176228 | orchestrator | + echo
2026-01-05 01:33:57.176232 | orchestrator | + _tempest tempest.api.image.v2
2026-01-05 01:33:57.176237 | orchestrator | + local regex=tempest.api.image.v2
2026-01-05 01:33:57.176244 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-01-05 01:33:57.177238 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:33:57.179654 | orchestrator | + tee -a /opt/tempest/20260105-0133.log
2026-01-05 01:34:01.126252 | orchestrator | 2026-01-05 01:34:01.123 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:34:01.227091 | orchestrator | 2026-01-05 01:34:01.224 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:34:01.227226 | orchestrator | 2026-01-05 01:34:01.224 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:34:01.227289 | orchestrator | 2026-01-05 01:34:01.224 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:34:01.227307 | orchestrator | 2026-01-05 01:34:01.225 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:01.227326 | orchestrator | 2026-01-05 01:34:01.225 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:34:01.227373 | orchestrator | 2026-01-05 01:34:01.225 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:34:01.227391 | orchestrator | 2026-01-05 01:34:01.225 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:34:01.227401 | orchestrator | 2026-01-05 01:34:01.225 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:34:01.227416 | orchestrator | 2026-01-05 01:34:01.226 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:34:01.227714 | orchestrator | 2026-01-05 01:34:01.226 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:34:01.227899 | orchestrator | 2026-01-05 01:34:01.226 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:34:01.228619 | orchestrator | 2026-01-05 01:34:01.226 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:34:01.228651 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:34:01.228658 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:34:01.228688 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:01.228694 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:34:01.228698 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:34:01.228702 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:34:01.229102 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:34:01.229122 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:34:01.229127 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:34:01.229131 | orchestrator | 2026-01-05 01:34:01.227 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:34:14.620877 | orchestrator |
2026-01-05 01:34:14.620964 | orchestrator | =========================
2026-01-05 01:34:14.620972 | orchestrator | Failures during discovery
2026-01-05 01:34:14.620976 | orchestrator | =========================
2026-01-05 01:34:14.620981 | orchestrator | --- stdout ---
2026-01-05 01:34:14.620987 | orchestrator | 2026-01-05 01:34:04.763 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:34:14.620996 | orchestrator | 2026-01-05 01:34:04.764 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:34:14.621002 | orchestrator | 2026-01-05 01:34:04.764 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:34:14.621007 | orchestrator | 2026-01-05 01:34:04.765 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:34:14.621011 | orchestrator | 2026-01-05 01:34:04.765 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:14.621016 | orchestrator | 2026-01-05 01:34:04.765 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:34:14.621021 | orchestrator | 2026-01-05 01:34:04.765 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:34:14.621025 | orchestrator | 2026-01-05 01:34:04.766 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:34:14.621029 | orchestrator | 2026-01-05 01:34:04.766 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:34:14.621033 | orchestrator | 2026-01-05 01:34:04.766 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:34:14.621038 | orchestrator | 2026-01-05 01:34:04.766 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:34:14.621042 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:34:14.621046 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:34:14.621050 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:34:14.621054 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:34:14.621079 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:14.621087 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:34:14.621093 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:34:14.621099 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:34:14.621105 | orchestrator | 2026-01-05 01:34:04.767 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:34:14.621111 | orchestrator | 2026-01-05 01:34:04.768 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:34:14.621117 | orchestrator | 2026-01-05 01:34:04.768 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:34:14.621136 | orchestrator | 2026-01-05 01:34:04.768 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:34:14.621146 | orchestrator | 2026-01-05 01:34:04.770 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-05 01:34:14.621154 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-05 01:34:14.621160 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-05 01:34:14.621166 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-05 01:34:14.621173 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:14.621195 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-05 01:34:14.621202 | orchestrator | 2026-01-05 01:34:05.651 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-05 01:34:14.621208 | orchestrator | 2026-01-05 01:34:05.652 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-05 01:34:14.621215 | orchestrator | 2026-01-05 01:34:05.652 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-05 01:34:14.621221 | orchestrator | 2026-01-05 01:34:05.652 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-05 01:34:14.621227 | orchestrator | 2026-01-05 01:34:05.652 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-05 01:34:14.621234 | orchestrator | 2026-01-05 01:34:05.652 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-05 01:34:14.621240 | orchestrator | --- import errors ---
2026-01-05 01:34:14.621247 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:34:14.621255 | orchestrator | Traceback (most recent call last):
2026-01-05 01:34:14.621263 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:34:14.621269 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:34:14.621276 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:34:14.621282 | orchestrator |     __import__(name)
2026-01-05 01:34:14.621289 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:34:14.621296 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-05 01:34:14.621303 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:34:14.621317 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:34:14.621325 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:34:14.621330 | orchestrator |
2026-01-05 01:34:14.621334 | orchestrator | ================================================================================
2026-01-05 01:34:14.621338 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-05 01:34:15.010992 | orchestrator |
2026-01-05 01:34:15.011140 | orchestrator | ## NETWORK (API)
2026-01-05 01:34:15.011167 | orchestrator |
2026-01-05 01:34:15.011184 | orchestrator | + echo
2026-01-05 01:34:15.011202 | orchestrator | + echo '## NETWORK (API)'
2026-01-05 01:34:15.011221 | orchestrator | + echo
2026-01-05 01:34:15.011237 | orchestrator | + _tempest tempest.api.network
2026-01-05 01:34:15.011256 | orchestrator | + local regex=tempest.api.network
2026-01-05 01:34:15.011277 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-01-05 01:34:15.012107 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:34:15.014871 | orchestrator | + tee -a /opt/tempest/20260105-0134.log
2026-01-05 01:34:18.677409 | orchestrator | 2026-01-05 01:34:18.674 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:34:31.693809 | orchestrator |
2026-01-05 01:34:31.693928 | orchestrator | =========================
2026-01-05 01:34:31.693938 | orchestrator | Failures during discovery
2026-01-05 01:34:31.693944 | orchestrator | =========================
2026-01-05 01:34:31.693950 | orchestrator | --- stdout ---
2026-01-05 01:34:31.693958 | orchestrator | 2026-01-05 01:34:22.334 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:34:31.694292 | orchestrator | --- import errors ---
2026-01-05 01:34:31.694301 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:34:31.694309 | orchestrator | Traceback (most recent call last):
2026-01-05 01:34:31.694319 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:34:31.694327 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:34:31.694337 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:34:31.694345 | orchestrator |     __import__(name)
2026-01-05 01:34:31.694354 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:34:31.694364 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-05 01:34:31.694374 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:34:31.694384 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:34:31.694395 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:34:31.694406 | orchestrator |
2026-01-05 01:34:31.694417 | orchestrator | ================================================================================
2026-01-05 01:34:31.694435 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-05 01:34:32.018734 | orchestrator |
2026-01-05 01:34:32.018815 | orchestrator | ## VOLUME (API)
2026-01-05 01:34:32.018823 | orchestrator |
2026-01-05 01:34:32.018828 | orchestrator | + echo
2026-01-05 01:34:32.018834 | orchestrator | + echo '## VOLUME (API)'
2026-01-05 01:34:32.018841 | orchestrator | + echo
2026-01-05 01:34:32.018867 | orchestrator | + _tempest tempest.api.volume
2026-01-05 01:34:32.018873 | orchestrator | + local regex=tempest.api.volume
2026-01-05 01:34:32.021758 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-01-05 01:34:32.021891 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:34:32.021906 | orchestrator | + tee -a /opt/tempest/20260105-0134.log
2026-01-05 01:34:35.719602 | orchestrator | 2026-01-05 01:34:35.716 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:34:48.909343 | orchestrator |
2026-01-05 01:34:48.910365 | orchestrator | =========================
2026-01-05 01:34:48.910422 | orchestrator | Failures during discovery
2026-01-05 01:34:48.910430 | orchestrator | =========================
2026-01-05 01:34:48.910437 | orchestrator | --- stdout ---
2026-01-05 01:34:48.910444 | orchestrator | 2026-01-05 01:34:39.458 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:34:48.910453 | orchestrator | 2026-01-05 01:34:39.459 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:34:48.910461 | orchestrator | 2026-01-05 01:34:39.459 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:34:48.910467 | orchestrator
| 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-05 01:34:48.910473 | orchestrator | 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-05 01:34:48.910479 | orchestrator | 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-05 01:34:48.910485 | orchestrator | 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-05 01:34:48.910490 | orchestrator | 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-05 01:34:48.910496 | orchestrator | 2026-01-05 01:34:39.460 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-05 01:34:48.910501 | orchestrator | 2026-01-05 01:34:39.461 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-05 01:34:48.910507 | orchestrator | 2026-01-05 01:34:39.461 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-05 01:34:48.910512 | orchestrator | 2026-01-05 01:34:39.461 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-05 01:34:48.910519 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-05 01:34:48.910525 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-05 01:34:48.910531 | orchestrator | 
2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-05 01:34:48.910536 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-05 01:34:48.910543 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-05 01:34:48.910548 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-05 01:34:48.910554 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-05 01:34:48.910581 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-05 01:34:48.910587 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-05 01:34:48.910592 | orchestrator | 2026-01-05 01:34:39.462 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-05 01:34:48.910597 | orchestrator | 2026-01-05 01:34:39.463 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-05 01:34:48.910605 | orchestrator | 2026-01-05 01:34:39.465 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 
2026-01-05 01:34:48.910613 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-05 01:34:48.910620 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-05 01:34:48.910625 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-05 01:34:48.910631 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-05 01:34:48.910658 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-05 01:34:48.910664 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-05 01:34:48.910669 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-05 01:34:48.910675 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-05 01:34:48.910680 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-05 01:34:48.910685 | orchestrator | 2026-01-05 01:34:40.308 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-05 01:34:48.910691 | orchestrator | 2026-01-05 01:34:40.309 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-05 01:34:48.910701 | orchestrator | --- import errors --- 2026-01-05 01:34:48.910710 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-05 01:34:48.910720 | orchestrator | Traceback 
(most recent call last): 2026-01-05 01:34:48.910730 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-05 01:34:48.910739 | orchestrator | module = self._get_module_from_name(name) 2026-01-05 01:34:48.910766 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-05 01:34:48.910776 | orchestrator | __import__(name) 2026-01-05 01:34:48.910784 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-05 01:34:48.910796 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-05 01:34:48.910804 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-05 01:34:48.910813 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-05 01:34:48.910821 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-05 01:34:48.910830 | orchestrator | 2026-01-05 01:34:48.910843 | orchestrator | ================================================================================ 2026-01-05 01:34:48.910851 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
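The discovery failure above boils down to one call: neutron_tempest_plugin uses `testtools.try_import` to probe for the optional designate plugin, but the testtools release shipped in this image no longer exposes that helper (it historically came from the `extras` package and was re-exported by older testtools versions), so module import crashes and the whole test run aborts during discovery. As a hedged sketch of what the missing helper does, using only the standard library — this is an illustrative re-implementation, not the testtools/extras code:

```python
import importlib


def try_import(name, alternative=None):
    """Best-effort import: return the module if it imports, else a fallback.

    Mirrors what neutron_tempest_plugin expects from testtools.try_import:
    probing for an optional dependency without letting ImportError escape.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# The failing call site guards an optional plugin in the same way;
# this returns the module when designate_tempest_plugin is installed,
# or None when it is not, instead of raising during discovery.
dns_base = try_import('designate_tempest_plugin.tests.base')
```

With a helper like this in place (or a testtools version that still re-exports `try_import`), an absent plugin degrades to a skipped test set rather than an import error that kills discovery for every regex in the run.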
2026-01-05 01:34:49.248739 | orchestrator |
2026-01-05 01:34:49.248975 | orchestrator | ## COMPUTE (API)
2026-01-05 01:34:49.248998 | orchestrator |
2026-01-05 01:34:49.249012 | orchestrator | + echo
2026-01-05 01:34:49.249026 | orchestrator | + echo '## COMPUTE (API)'
2026-01-05 01:34:49.249093 | orchestrator | + echo
2026-01-05 01:34:49.249107 | orchestrator | + _tempest tempest.api.compute
2026-01-05 01:34:49.249120 | orchestrator | + local regex=tempest.api.compute
2026-01-05 01:34:49.249235 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-01-05 01:34:49.249802 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:34:49.253698 | orchestrator | + tee -a /opt/tempest/20260105-0134.log
2026-01-05 01:34:53.006679 | orchestrator | 2026-01-05 01:34:53.003 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:34:53.104086 | orchestrator | 2026-01-05 01:34:53.101 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:34:53.104475 | orchestrator | 2026-01-05 01:34:53.101 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:34:53.104538 | orchestrator | 2026-01-05 01:34:53.101 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:34:53.104551 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:53.104563 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:34:53.104575 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:34:53.104586 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:34:53.104597 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:34:53.104652 | orchestrator | 2026-01-05 01:34:53.102 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:34:53.105532 | orchestrator | 2026-01-05 01:34:53.103 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:34:53.105563 | orchestrator | 2026-01-05 01:34:53.103 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:34:53.105575 | orchestrator | 2026-01-05 01:34:53.103 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:34:53.105586 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:34:53.105596 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:34:53.106192 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:34:53.106216 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:34:53.106227 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:34:53.106238 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:34:53.106248 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:34:53.106258 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:34:53.106294 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:34:53.106301 | orchestrator | 2026-01-05 01:34:53.104 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:06.272008 | orchestrator |
2026-01-05 01:35:06.272100 | orchestrator | =========================
2026-01-05 01:35:06.272108 | orchestrator | Failures during discovery
2026-01-05 01:35:06.272113 | orchestrator | =========================
2026-01-05 01:35:06.272118 | orchestrator | --- stdout ---
2026-01-05 01:35:06.272125 | orchestrator | 2026-01-05 01:34:56.769 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:35:06.272132 | orchestrator | 2026-01-05 01:34:56.770 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:35:06.272138 | orchestrator | 2026-01-05 01:34:56.771 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:35:06.272144 | orchestrator | 2026-01-05 01:34:56.771 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:35:06.272148 | orchestrator | 2026-01-05 01:34:56.771 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:06.272154 | orchestrator | 2026-01-05 01:34:56.771 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:35:06.272159 | orchestrator | 2026-01-05 01:34:56.772 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:35:06.272163 | orchestrator | 2026-01-05 01:34:56.772 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:35:06.272168 | orchestrator | 2026-01-05 01:34:56.772 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:35:06.272173 | orchestrator | 2026-01-05 01:34:56.772 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:35:06.272190 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:35:06.272195 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:35:06.272199 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:35:06.272204 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:35:06.272208 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:35:06.272212 | orchestrator | 2026-01-05 01:34:56.773 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:06.272218 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:35:06.272222 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:35:06.272227 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:35:06.272231 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:35:06.272235 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:35:06.272240 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:35:06.272259 | orchestrator | 2026-01-05 01:34:56.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:06.272266 | orchestrator | 2026-01-05 01:34:56.776 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-05 01:35:06.272272 | orchestrator | 2026-01-05 01:34:57.584 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-05 01:35:06.272277 | orchestrator | 2026-01-05 01:34:57.584 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-05 01:35:06.272281 | orchestrator | 2026-01-05 01:34:57.584 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-05 01:35:06.272286 | orchestrator | 2026-01-05 01:34:57.584 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:06.272303 | orchestrator | 2026-01-05 01:34:57.584 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-05 01:35:06.272308 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-05 01:35:06.272312 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-05 01:35:06.272316 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-05 01:35:06.272320 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-05 01:35:06.272325 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-05 01:35:06.272329 | orchestrator | 2026-01-05 01:34:57.585 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-05 01:35:06.272334 | orchestrator | --- import errors ---
2026-01-05 01:35:06.272348 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:35:06.272353 | orchestrator | Traceback (most recent call last):
2026-01-05 01:35:06.272363 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:35:06.272368 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:35:06.272373 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:35:06.272378 | orchestrator |     __import__(name)
2026-01-05 01:35:06.272382 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:35:06.272386 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in
2026-01-05 01:35:06.272391 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:35:06.272395 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:35:06.272399 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:35:06.272404 | orchestrator |
2026-01-05 01:35:06.272408 | orchestrator | ================================================================================
2026-01-05 01:35:06.272413 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-05 01:35:06.770425 | orchestrator |
2026-01-05 01:35:06.770521 | orchestrator | ## DNS (API)
2026-01-05 01:35:06.770529 | orchestrator |
2026-01-05 01:35:06.770534 | orchestrator | + echo
2026-01-05 01:35:06.770540 | orchestrator | + echo '## DNS (API)'
2026-01-05 01:35:06.770545 | orchestrator | + echo
2026-01-05 01:35:06.770551 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-01-05 01:35:06.770557 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-01-05 01:35:06.770947 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-01-05 01:35:06.773108 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:35:06.778200 | orchestrator | + tee -a /opt/tempest/20260105-0135.log
2026-01-05 01:35:10.761837 | orchestrator | 2026-01-05 01:35:10.757 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:35:10.854456 | orchestrator | 2026-01-05 01:35:10.851 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:35:10.854542 | orchestrator | 2026-01-05 01:35:10.852 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:35:10.854550 | orchestrator | 2026-01-05 01:35:10.852 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:35:10.854556 | orchestrator | 2026-01-05 01:35:10.852 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:10.854562 | orchestrator | 2026-01-05 01:35:10.852 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:35:10.855023 | orchestrator | 2026-01-05 01:35:10.853 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:35:10.855579 | orchestrator | 2026-01-05 01:35:10.853 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:35:10.855607 | orchestrator | 2026-01-05 01:35:10.853 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:35:10.855617 | orchestrator | 2026-01-05 01:35:10.854 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:35:10.856162 | orchestrator | 2026-01-05 01:35:10.854 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:35:10.856182 | orchestrator | 2026-01-05 01:35:10.854 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:35:10.856819 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:35:10.856943 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:35:10.856960 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:35:10.857001 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:10.857271 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:35:10.857281 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:35:10.857285 | orchestrator | 2026-01-05 01:35:10.855 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:35:10.857289 | orchestrator | 2026-01-05 01:35:10.856 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:35:10.857571 | orchestrator | 2026-01-05 01:35:10.856 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:35:10.857606 | orchestrator | 2026-01-05 01:35:10.856 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:35:10.857613 | orchestrator | 2026-01-05 01:35:10.856 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:23.393240 | orchestrator |
2026-01-05 01:35:23.393348 | orchestrator | =========================
2026-01-05 01:35:23.393364 | orchestrator | Failures during discovery
2026-01-05 01:35:23.393374 | orchestrator | =========================
2026-01-05 01:35:23.393383 | orchestrator | --- stdout ---
2026-01-05 01:35:23.393394 | orchestrator | 2026-01-05 01:35:14.430 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:35:23.393405 | orchestrator | 2026-01-05 01:35:14.431 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:35:23.393416 | orchestrator | 2026-01-05 01:35:14.431 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:35:23.393426 | orchestrator | 2026-01-05 01:35:14.432 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:35:23.393435 | orchestrator | 2026-01-05 01:35:14.432 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:23.393444 | orchestrator | 2026-01-05 01:35:14.432 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:35:23.393453 | orchestrator | 2026-01-05 01:35:14.432 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:35:23.393463 | orchestrator | 2026-01-05 01:35:14.432 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:35:23.393472 | orchestrator | 2026-01-05 01:35:14.433 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:35:23.393480 | orchestrator | 2026-01-05 01:35:14.433 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:35:23.393489 | orchestrator | 2026-01-05 01:35:14.433 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:35:23.393499 | orchestrator | 2026-01-05 01:35:14.433 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:35:23.393509 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:35:23.393520 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:35:23.393531 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:35:23.393541 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:23.393553 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:35:23.393564 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:35:23.393574 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:35:23.393584 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:35:23.393595 | orchestrator | 2026-01-05 01:35:14.434 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:35:23.393605 | orchestrator | 2026-01-05 01:35:14.435 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:35:23.393616 | orchestrator | 2026-01-05 01:35:14.435 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:23.393657 | orchestrator | 2026-01-05 01:35:14.437 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-05 01:35:23.393670 | orchestrator | 2026-01-05 01:35:15.268 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-05 01:35:23.393681 | orchestrator | 2026-01-05 01:35:15.268 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-05 01:35:23.393691 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-05 01:35:23.393701 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:23.393731 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-05 01:35:23.393742 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-05 01:35:23.393752 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-05 01:35:23.393764 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-05 01:35:23.393776 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-05 01:35:23.393789 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-05 01:35:23.393801 | orchestrator | 2026-01-05 01:35:15.269 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-05 01:35:23.393814 | orchestrator | --- import errors ---
2026-01-05 01:35:23.393827 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:35:23.393837 | orchestrator | Traceback (most recent call last):
2026-01-05 01:35:23.393848 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:35:23.393858 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:35:23.393869 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:35:23.393882 | orchestrator |     __import__(name)
2026-01-05 01:35:23.393894 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:35:23.393907 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in
2026-01-05 01:35:23.393919 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:35:23.393929 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:35:23.393961 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:35:23.393971 | orchestrator |
2026-01-05 01:35:23.393980 | orchestrator | ================================================================================
2026-01-05 01:35:23.393990 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-05 01:35:23.719734 | orchestrator |
2026-01-05 01:35:23.719827 | orchestrator | ## OBJECT-STORE (API)
2026-01-05 01:35:23.719838 | orchestrator |
2026-01-05 01:35:23.719845 | orchestrator | + echo
2026-01-05 01:35:23.719853 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-01-05 01:35:23.719860 | orchestrator | + echo
2026-01-05 01:35:23.719866 | orchestrator | + _tempest tempest.api.object_storage
2026-01-05 01:35:23.719874 | orchestrator | + local regex=tempest.api.object_storage
2026-01-05 01:35:23.720294 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-01-05 01:35:23.721520 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-05 01:35:23.725831 | orchestrator | + tee -a /opt/tempest/20260105-0135.log
2026-01-05 01:35:27.388745 | orchestrator | 2026-01-05 01:35:27.386 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-05 01:35:27.485478 | orchestrator | 2026-01-05 01:35:27.482 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:35:27.485700 | orchestrator | 2026-01-05 01:35:27.483 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:35:27.485722 | orchestrator | 2026-01-05 01:35:27.483 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:35:27.485734 | orchestrator | 2026-01-05 01:35:27.483 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:27.485743 | orchestrator | 2026-01-05 01:35:27.483 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:35:27.485763 | orchestrator | 2026-01-05 01:35:27.483 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:35:27.485790 | orchestrator | 2026-01-05 01:35:27.484 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:35:27.485799 | orchestrator | 2026-01-05 01:35:27.484 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:35:27.486456 | orchestrator | 2026-01-05 01:35:27.484 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:35:27.486508 | orchestrator | 2026-01-05 01:35:27.484 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:35:27.486525 | orchestrator | 2026-01-05 01:35:27.484 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:35:27.487304 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:35:27.487362 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:35:27.487373 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:35:27.487393 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:27.487407 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:35:27.487414 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:35:27.487421 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:35:27.487428 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:35:27.487436 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:35:27.487444 | orchestrator | 2026-01-05 01:35:27.485 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:35:27.487452 | orchestrator | 2026-01-05 01:35:27.486 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:39.121532 | orchestrator |
2026-01-05 01:35:39.121620 | orchestrator | =========================
2026-01-05 01:35:39.121627 | orchestrator | Failures during discovery
2026-01-05 01:35:39.121632 | orchestrator | =========================
2026-01-05 01:35:39.121637 | orchestrator | --- stdout ---
2026-01-05 01:35:39.121643 | orchestrator | 2026-01-05 01:35:31.051 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-05 01:35:39.121695 | orchestrator | 2026-01-05 01:35:31.052 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-05 01:35:39.121713 | orchestrator | 2026-01-05 01:35:31.052 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-05 01:35:39.121721 | orchestrator | 2026-01-05 01:35:31.053 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-05 01:35:39.121728 | orchestrator | 2026-01-05 01:35:31.053 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:39.121735 | orchestrator | 2026-01-05 01:35:31.053 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-05 01:35:39.121742 | orchestrator | 2026-01-05 01:35:31.053 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-05 01:35:39.121747 | orchestrator | 2026-01-05 01:35:31.053 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-05 01:35:39.121754 | orchestrator | 2026-01-05 01:35:31.054 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-05 01:35:39.121760 | orchestrator | 2026-01-05 01:35:31.054 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-05 01:35:39.121767 | orchestrator | 2026-01-05 01:35:31.054 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-05 01:35:39.121773 | orchestrator | 2026-01-05 01:35:31.054 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-05 01:35:39.121780 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-05 01:35:39.121786 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-05 01:35:39.121791 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-05 01:35:39.121797 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:39.121805 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-05 01:35:39.121810 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-05 01:35:39.121816 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-05 01:35:39.121822 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-05 01:35:39.121828 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-05 01:35:39.121834 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-05 01:35:39.121840 | orchestrator | 2026-01-05 01:35:31.055 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-05 01:35:39.121850 | orchestrator | 2026-01-05 01:35:31.058 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-05 01:35:39.121858 | orchestrator | 2026-01-05 01:35:31.875 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-05 01:35:39.121874 | orchestrator | 2026-01-05 01:35:31.875 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-05 01:35:39.121881 | orchestrator | 2026-01-05 01:35:31.875 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-05 01:35:39.121887 | orchestrator | 2026-01-05 01:35:31.875 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-05 01:35:39.121910 | orchestrator | 2026-01-05 01:35:31.875 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-05 01:35:39.121917 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-05 01:35:39.121924 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-05 01:35:39.121930 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-05 01:35:39.121937 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-05 01:35:39.121943 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-05 01:35:39.121948 | orchestrator | 2026-01-05 01:35:31.876 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-05 01:35:39.121954 | orchestrator | --- import errors ---
2026-01-05 01:35:39.121990 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-05 01:35:39.121996 | orchestrator | Traceback (most recent call last):
2026-01-05 01:35:39.122070 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-05 01:35:39.122080 | orchestrator |     module = self._get_module_from_name(name)
2026-01-05 01:35:39.122086 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-05 01:35:39.122090 | orchestrator |     __import__(name)
2026-01-05 01:35:39.122095 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-05 01:35:39.122099 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in
2026-01-05 01:35:39.122104 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-05 01:35:39.122109 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-05 01:35:39.122114 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-05 01:35:39.122118 | orchestrator |
2026-01-05 01:35:39.122123 | orchestrator | ================================================================================
2026-01-05 01:35:39.122127 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
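Editor's note on the recurring import error: `neutron_tempest_plugin` calls `testtools.try_import`, a helper that the installed testtools release apparently no longer provides, so discovery of the whole test tree aborts. As a minimal sketch, assuming only the behavior implied by the traceback (return the module if it imports, otherwise a fallback), the missing helper can be approximated with `importlib`; the real remedy would be pinning a testtools version that still ships `try_import` or patching the plugin.

```python
import importlib


def try_import(name, alternative=None):
    # Approximation of the helper the plugin expects: import the named
    # module and return it, or return the fallback if it is unavailable.
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# The plugin relies on the None fallback to skip DNS integration tests
# when the designate tempest plugin is absent. The missing module name
# below is hypothetical, chosen only to exercise the fallback path.
present = try_import("json")
absent = try_import("hypothetical_missing_module_for_illustration")
```

This mirrors how the plugin guards optional dependencies: a `None` result from `try_import` signals "dependency not installed" rather than raising at import time.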
2026-01-05 01:35:40.044588 | orchestrator | ok: Runtime: 0:03:33.871206
2026-01-05 01:35:40.075373 |
2026-01-05 01:35:40.075622 | TASK [Check prometheus alert status]
2026-01-05 01:35:40.619668 | orchestrator | skipping: Conditional result was False
2026-01-05 01:35:40.623100 |
2026-01-05 01:35:40.623260 | PLAY RECAP
2026-01-05 01:35:40.623399 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-01-05 01:35:40.623458 |
2026-01-05 01:35:40.877429 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-05 01:35:40.878583 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 01:35:41.684780 |
2026-01-05 01:35:41.684961 | PLAY [Post output play]
2026-01-05 01:35:41.702272 |
2026-01-05 01:35:41.702433 | LOOP [stage-output : Register sources]
2026-01-05 01:35:41.773968 |
2026-01-05 01:35:41.774327 | TASK [stage-output : Check sudo]
2026-01-05 01:35:42.635611 | orchestrator | sudo: a password is required
2026-01-05 01:35:42.813995 | orchestrator | ok: Runtime: 0:00:00.012580
2026-01-05 01:35:42.826748 |
2026-01-05 01:35:42.826928 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-05 01:35:42.869167 |
2026-01-05 01:35:42.869525 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-05 01:35:42.951485 | orchestrator | ok
2026-01-05 01:35:42.961197 |
2026-01-05 01:35:42.961356 | LOOP [stage-output : Ensure target folders exist]
2026-01-05 01:35:43.465208 | orchestrator | ok: "docs"
2026-01-05 01:35:43.465523 |
2026-01-05 01:35:43.750904 | orchestrator | ok: "artifacts"
2026-01-05 01:35:44.019335 | orchestrator | ok: "logs"
2026-01-05 01:35:44.043764 |
2026-01-05 01:35:44.043983 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-05 01:35:44.084373 |
2026-01-05 01:35:44.084747 | TASK [stage-output : Make all log files readable]
2026-01-05 01:35:44.394013 | orchestrator | ok
2026-01-05 01:35:44.403157 |
2026-01-05 01:35:44.403311 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-05 01:35:44.449289 | orchestrator | skipping: Conditional result was False
2026-01-05 01:35:44.468731 |
2026-01-05 01:35:44.468931 | TASK [stage-output : Discover log files for compression]
2026-01-05 01:35:44.494992 | orchestrator | skipping: Conditional result was False
2026-01-05 01:35:44.508272 |
2026-01-05 01:35:44.508412 | LOOP [stage-output : Archive everything from logs]
2026-01-05 01:35:44.551095 |
2026-01-05 01:35:44.551267 | PLAY [Post cleanup play]
2026-01-05 01:35:44.561261 |
2026-01-05 01:35:44.561410 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 01:35:44.631457 | orchestrator | ok
2026-01-05 01:35:44.643356 |
2026-01-05 01:35:44.643594 | TASK [Set cloud fact (local deployment)]
2026-01-05 01:35:44.689638 | orchestrator | skipping: Conditional result was False
2026-01-05 01:35:44.699914 |
2026-01-05 01:35:44.700061 | TASK [Clean the cloud environment]
2026-01-05 01:35:45.393700 | orchestrator | 2026-01-05 01:35:45 - clean up servers
2026-01-05 01:35:46.227896 | orchestrator | 2026-01-05 01:35:46 - testbed-manager
2026-01-05 01:35:46.314761 | orchestrator | 2026-01-05 01:35:46 - testbed-node-0
2026-01-05 01:35:46.396799 | orchestrator | 2026-01-05 01:35:46 - testbed-node-2
2026-01-05 01:35:46.491741 | orchestrator | 2026-01-05 01:35:46 - testbed-node-3
2026-01-05 01:35:46.583593 | orchestrator | 2026-01-05 01:35:46 - testbed-node-4
2026-01-05 01:35:46.683784 | orchestrator | 2026-01-05 01:35:46 - testbed-node-5
2026-01-05 01:35:46.786679 | orchestrator | 2026-01-05 01:35:46 - testbed-node-1
2026-01-05 01:35:46.877612 | orchestrator | 2026-01-05 01:35:46 - clean up keypairs
2026-01-05 01:35:46.892145 | orchestrator | 2026-01-05 01:35:46 - testbed
2026-01-05 01:35:46.918763 | orchestrator | 2026-01-05 01:35:46 - wait for servers to be gone
2026-01-05 01:35:55.726510 | orchestrator | 2026-01-05 01:35:55 - clean up ports
2026-01-05 01:35:55.923201 | orchestrator | 2026-01-05 01:35:55 - 00ec9ff0-4207-42e9-81fb-457363579b78
2026-01-05 01:35:56.198641 | orchestrator | 2026-01-05 01:35:56 - 244e9f84-abae-4d5b-8292-79b28165f23c
2026-01-05 01:35:56.480444 | orchestrator | 2026-01-05 01:35:56 - 534c6c66-3d01-48a5-821f-01231b792d40
2026-01-05 01:35:56.691244 | orchestrator | 2026-01-05 01:35:56 - 710fa76e-fc82-423f-a233-78758952a41a
2026-01-05 01:35:56.937823 | orchestrator | 2026-01-05 01:35:56 - 8aeec703-208d-41a1-96b3-3d3ac9755dbf
2026-01-05 01:35:57.333121 | orchestrator | 2026-01-05 01:35:57 - a1404bd9-8dbe-4e48-b20a-ca065fc5dac2
2026-01-05 01:35:57.533380 | orchestrator | 2026-01-05 01:35:57 - cfc5bbba-97c3-434e-b9d8-11a2f685ae39
2026-01-05 01:35:57.782580 | orchestrator | 2026-01-05 01:35:57 - clean up volumes
2026-01-05 01:35:57.890066 | orchestrator | 2026-01-05 01:35:57 - testbed-volume-2-node-base
2026-01-05 01:35:57.929732 | orchestrator | 2026-01-05 01:35:57 - testbed-volume-4-node-base
2026-01-05 01:35:57.970251 | orchestrator | 2026-01-05 01:35:57 - testbed-volume-1-node-base
2026-01-05 01:35:58.013366 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-5-node-base
2026-01-05 01:35:58.055937 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-3-node-base
2026-01-05 01:35:58.098699 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-0-node-base
2026-01-05 01:35:58.140140 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-manager-base
2026-01-05 01:35:58.188791 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-7-node-4
2026-01-05 01:35:58.229056 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-2-node-5
2026-01-05 01:35:58.270669 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-0-node-3
2026-01-05 01:35:58.311924 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-1-node-4
2026-01-05 01:35:58.353727 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-8-node-5
2026-01-05 01:35:58.392891 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-4-node-4
2026-01-05 01:35:58.433906 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-6-node-3
2026-01-05 01:35:58.476753 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-3-node-3
2026-01-05 01:35:58.514546 | orchestrator | 2026-01-05 01:35:58 - testbed-volume-5-node-5
2026-01-05 01:35:58.554551 | orchestrator | 2026-01-05 01:35:58 - disconnect routers
2026-01-05 01:35:58.672254 | orchestrator | 2026-01-05 01:35:58 - testbed
2026-01-05 01:35:59.662946 | orchestrator | 2026-01-05 01:35:59 - clean up subnets
2026-01-05 01:35:59.720269 | orchestrator | 2026-01-05 01:35:59 - subnet-testbed-management
2026-01-05 01:35:59.897804 | orchestrator | 2026-01-05 01:35:59 - clean up networks
2026-01-05 01:36:00.085653 | orchestrator | 2026-01-05 01:36:00 - net-testbed-management
2026-01-05 01:36:00.385925 | orchestrator | 2026-01-05 01:36:00 - clean up security groups
2026-01-05 01:36:00.425054 | orchestrator | 2026-01-05 01:36:00 - testbed-node
2026-01-05 01:36:00.529598 | orchestrator | 2026-01-05 01:36:00 - testbed-management
2026-01-05 01:36:00.650116 | orchestrator | 2026-01-05 01:36:00 - clean up floating ips
2026-01-05 01:36:00.694985 | orchestrator | 2026-01-05 01:36:00 - 81.163.192.35
2026-01-05 01:36:01.036461 | orchestrator | 2026-01-05 01:36:01 - clean up routers
2026-01-05 01:36:01.136399 | orchestrator | 2026-01-05 01:36:01 - testbed
2026-01-05 01:36:02.256228 | orchestrator | ok: Runtime: 0:00:17.022999
2026-01-05 01:36:02.260806 |
2026-01-05 01:36:02.260982 | PLAY RECAP
2026-01-05 01:36:02.261137 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-05 01:36:02.261208 |
2026-01-05 01:36:02.425570 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 01:36:02.426733 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 01:36:03.222781 |
2026-01-05 01:36:03.223040 | PLAY [Cleanup play]
2026-01-05 01:36:03.239391 |
2026-01-05 01:36:03.239527 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 01:36:03.298186 | orchestrator | ok
2026-01-05 01:36:03.308250 |
2026-01-05 01:36:03.308422 | TASK [Set cloud fact (local deployment)]
2026-01-05 01:36:03.335340 | orchestrator | skipping: Conditional result was False
2026-01-05 01:36:03.353604 |
2026-01-05 01:36:03.353789 | TASK [Clean the cloud environment]
2026-01-05 01:36:04.597282 | orchestrator | 2026-01-05 01:36:04 - clean up servers
2026-01-05 01:36:05.072887 | orchestrator | 2026-01-05 01:36:05 - clean up keypairs
2026-01-05 01:36:05.091122 | orchestrator | 2026-01-05 01:36:05 - wait for servers to be gone
2026-01-05 01:36:05.137665 | orchestrator | 2026-01-05 01:36:05 - clean up ports
2026-01-05 01:36:05.222960 | orchestrator | 2026-01-05 01:36:05 - clean up volumes
2026-01-05 01:36:05.294674 | orchestrator | 2026-01-05 01:36:05 - disconnect routers
2026-01-05 01:36:05.325614 | orchestrator | 2026-01-05 01:36:05 - clean up subnets
2026-01-05 01:36:05.353516 | orchestrator | 2026-01-05 01:36:05 - clean up networks
2026-01-05 01:36:05.510980 | orchestrator | 2026-01-05 01:36:05 - clean up security groups
2026-01-05 01:36:05.548454 | orchestrator | 2026-01-05 01:36:05 - clean up floating ips
2026-01-05 01:36:05.571630 | orchestrator | 2026-01-05 01:36:05 - clean up routers
2026-01-05 01:36:05.896515 | orchestrator | ok: Runtime: 0:00:01.444370
2026-01-05 01:36:05.900442 |
2026-01-05 01:36:05.900648 | PLAY RECAP
2026-01-05 01:36:05.900894 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-05 01:36:05.900983 |
2026-01-05 01:36:06.053377 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 01:36:06.055644 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 01:36:06.865926 |
2026-01-05 01:36:06.866124 | PLAY [Base post-fetch]
2026-01-05 01:36:06.888236 |
2026-01-05 01:36:06.888446 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-05 01:36:06.945748 | orchestrator | skipping: Conditional result was False
2026-01-05 01:36:06.952744 |
2026-01-05 01:36:06.952927 | TASK [fetch-output : Set log path for single node]
2026-01-05 01:36:06.993199 | orchestrator | ok
2026-01-05 01:36:06.999461 |
2026-01-05 01:36:06.999629 | LOOP [fetch-output : Ensure local output dirs]
2026-01-05 01:36:07.506054 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/logs"
2026-01-05 01:36:07.782922 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/artifacts"
2026-01-05 01:36:08.055736 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aa3aa9c6cbca4062aefd45b6f753f4dc/work/docs"
2026-01-05 01:36:08.072415 |
2026-01-05 01:36:08.072614 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-05 01:36:08.959033 | orchestrator | changed: .d..t...... ./
2026-01-05 01:36:08.959261 | orchestrator | changed: All items complete
2026-01-05 01:36:08.959294 |
2026-01-05 01:36:09.718292 | orchestrator | changed: .d..t...... ./
2026-01-05 01:36:10.463607 | orchestrator | changed: .d..t...... ./
2026-01-05 01:36:10.487688 |
2026-01-05 01:36:10.487855 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-05 01:36:10.525340 | orchestrator | skipping: Conditional result was False
2026-01-05 01:36:10.532709 | orchestrator | skipping: Conditional result was False
2026-01-05 01:36:10.553886 |
2026-01-05 01:36:10.554041 | PLAY RECAP
2026-01-05 01:36:10.554185 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-05 01:36:10.554231 |
2026-01-05 01:36:10.712688 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 01:36:10.714109 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 01:36:11.576257 |
2026-01-05 01:36:11.576434 | PLAY [Base post]
2026-01-05 01:36:11.593411 |
2026-01-05 01:36:11.593585 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-05 01:36:12.657199 | orchestrator | changed
2026-01-05 01:36:12.665323 |
2026-01-05 01:36:12.665461 | PLAY RECAP
2026-01-05 01:36:12.665527 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-05 01:36:12.665632 |
2026-01-05 01:36:12.803860 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 01:36:12.805490 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-05 01:36:13.602460 |
2026-01-05 01:36:13.602659 | PLAY [Base post-logs]
2026-01-05 01:36:13.613780 |
2026-01-05 01:36:13.613940 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-05 01:36:14.130815 | localhost | changed
2026-01-05 01:36:14.145204 |
2026-01-05 01:36:14.145387 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-05 01:36:14.183327 | localhost | ok
2026-01-05 01:36:14.188975 |
2026-01-05 01:36:14.189196 | TASK [Set zuul-log-path fact]
2026-01-05 01:36:14.205779 | localhost | ok
2026-01-05 01:36:14.216073 |
2026-01-05 01:36:14.216215 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 01:36:14.243012 | localhost | ok
2026-01-05 01:36:14.246446 |
2026-01-05 01:36:14.246586 | TASK [upload-logs : Create log directories]
2026-01-05 01:36:14.773138 | localhost | changed
2026-01-05 01:36:14.778254 |
2026-01-05 01:36:14.778440 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-05 01:36:15.337726 | localhost -> localhost | ok: Runtime: 0:00:00.008118
2026-01-05 01:36:15.350324 |
2026-01-05 01:36:15.350617 | TASK [upload-logs : Upload logs to log server]
2026-01-05 01:36:15.949831 | localhost | Output suppressed because no_log was given
2026-01-05 01:36:15.954592 |
2026-01-05 01:36:15.954819 | LOOP [upload-logs : Compress console log and json output]
2026-01-05 01:36:16.025664 | localhost | skipping: Conditional result was False
2026-01-05 01:36:16.032895 | localhost | skipping: Conditional result was False
2026-01-05 01:36:16.037122 |
2026-01-05 01:36:16.037257 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-05 01:36:16.104927 | localhost | skipping: Conditional result was False
2026-01-05 01:36:16.105267 |
2026-01-05 01:36:16.114177 | localhost | skipping: Conditional result was False
2026-01-05 01:36:16.129757 |
2026-01-05 01:36:16.130085 | LOOP [upload-logs : Upload console log and json output]